Nov 25 08:11:05 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 08:11:05 crc restorecon[4750]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 08:11:05 crc restorecon[4750]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc 
restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc 
restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 
08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc 
restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc 
restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 
crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 
08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc 
restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 08:11:05 crc restorecon[4750]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 
crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc 
restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 08:11:05 crc restorecon[4750]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:05 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc 
restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 08:11:06 crc restorecon[4750]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 08:11:06 crc kubenswrapper[4760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 08:11:06 crc kubenswrapper[4760]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 08:11:06 crc kubenswrapper[4760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 08:11:06 crc kubenswrapper[4760]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 25 08:11:06 crc kubenswrapper[4760]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 25 08:11:06 crc kubenswrapper[4760]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.705534 4760 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708909 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708933 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708938 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708942 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708946 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708950 4760 feature_gate.go:330] unrecognized feature gate: Example Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708954 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708959 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708963 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708967 4760 feature_gate.go:330] 
unrecognized feature gate: PersistentIPsForVirtualization Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708972 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708978 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708984 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.708989 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709002 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709006 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709010 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709014 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709019 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709024 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709029 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709033 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709037 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709041 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709045 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709049 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709053 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709057 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709061 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709064 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709068 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709072 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709075 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709079 4760 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709083 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709086 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709089 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709099 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709102 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709106 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709111 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709115 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709118 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709122 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709126 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709130 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709134 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709138 4760 feature_gate.go:330] unrecognized 
feature gate: AzureWorkloadIdentity Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709141 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709146 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709151 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709154 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709158 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709162 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709165 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709169 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709172 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709175 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709179 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709183 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709188 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709193 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709196 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709200 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709204 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709207 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709211 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709215 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709219 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709222 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.709226 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709822 4760 flags.go:64] FLAG: --address="0.0.0.0" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709836 4760 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709854 4760 flags.go:64] FLAG: --anonymous-auth="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709860 4760 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709867 4760 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 25 08:11:06 crc 
kubenswrapper[4760]: I1125 08:11:06.709873 4760 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709879 4760 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709888 4760 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709893 4760 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709897 4760 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709902 4760 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709907 4760 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709912 4760 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709916 4760 flags.go:64] FLAG: --cgroup-root="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709921 4760 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709926 4760 flags.go:64] FLAG: --client-ca-file="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709930 4760 flags.go:64] FLAG: --cloud-config="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709934 4760 flags.go:64] FLAG: --cloud-provider="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709939 4760 flags.go:64] FLAG: --cluster-dns="[]" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709947 4760 flags.go:64] FLAG: --cluster-domain="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709952 4760 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709956 4760 flags.go:64] FLAG: 
--config-dir="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709960 4760 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709965 4760 flags.go:64] FLAG: --container-log-max-files="5" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709971 4760 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709975 4760 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709979 4760 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709983 4760 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709988 4760 flags.go:64] FLAG: --contention-profiling="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709992 4760 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.709996 4760 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710000 4760 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710004 4760 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710009 4760 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710013 4760 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710017 4760 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710022 4760 flags.go:64] FLAG: --enable-load-reader="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710026 4760 flags.go:64] FLAG: --enable-server="true" Nov 25 08:11:06 crc 
kubenswrapper[4760]: I1125 08:11:06.710039 4760 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710049 4760 flags.go:64] FLAG: --event-burst="100" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710054 4760 flags.go:64] FLAG: --event-qps="50" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710058 4760 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710062 4760 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710066 4760 flags.go:64] FLAG: --eviction-hard="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710071 4760 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710075 4760 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710079 4760 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710083 4760 flags.go:64] FLAG: --eviction-soft="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710087 4760 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710091 4760 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710096 4760 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710102 4760 flags.go:64] FLAG: --experimental-mounter-path="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710109 4760 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710119 4760 flags.go:64] FLAG: --fail-swap-on="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710125 4760 flags.go:64] FLAG: --feature-gates="" Nov 25 08:11:06 crc 
kubenswrapper[4760]: I1125 08:11:06.710132 4760 flags.go:64] FLAG: --file-check-frequency="20s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710138 4760 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710143 4760 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710148 4760 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710154 4760 flags.go:64] FLAG: --healthz-port="10248" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710159 4760 flags.go:64] FLAG: --help="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710164 4760 flags.go:64] FLAG: --hostname-override="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710168 4760 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710173 4760 flags.go:64] FLAG: --http-check-frequency="20s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710178 4760 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710182 4760 flags.go:64] FLAG: --image-credential-provider-config="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710187 4760 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710192 4760 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710197 4760 flags.go:64] FLAG: --image-service-endpoint="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710201 4760 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710205 4760 flags.go:64] FLAG: --kube-api-burst="100" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710212 4760 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710217 4760 flags.go:64] FLAG: --kube-api-qps="50" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710221 4760 flags.go:64] FLAG: --kube-reserved="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710234 4760 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710238 4760 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710257 4760 flags.go:64] FLAG: --kubelet-cgroups="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710261 4760 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710265 4760 flags.go:64] FLAG: --lock-file="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710270 4760 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710275 4760 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710279 4760 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710286 4760 flags.go:64] FLAG: --log-json-split-stream="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710290 4760 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710294 4760 flags.go:64] FLAG: --log-text-split-stream="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710299 4760 flags.go:64] FLAG: --logging-format="text" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710302 4760 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710307 4760 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 
08:11:06.710311 4760 flags.go:64] FLAG: --manifest-url="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710316 4760 flags.go:64] FLAG: --manifest-url-header="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710322 4760 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710326 4760 flags.go:64] FLAG: --max-open-files="1000000" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710332 4760 flags.go:64] FLAG: --max-pods="110" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710336 4760 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710340 4760 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710344 4760 flags.go:64] FLAG: --memory-manager-policy="None" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710355 4760 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710359 4760 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710363 4760 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710367 4760 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710381 4760 flags.go:64] FLAG: --node-status-max-images="50" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710386 4760 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710390 4760 flags.go:64] FLAG: --oom-score-adj="-999" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710396 4760 flags.go:64] FLAG: --pod-cidr="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710401 4760 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710409 4760 flags.go:64] FLAG: --pod-manifest-path="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710414 4760 flags.go:64] FLAG: --pod-max-pids="-1" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710419 4760 flags.go:64] FLAG: --pods-per-core="0" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710424 4760 flags.go:64] FLAG: --port="10250" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710429 4760 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710441 4760 flags.go:64] FLAG: --provider-id="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710445 4760 flags.go:64] FLAG: --qos-reserved="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710450 4760 flags.go:64] FLAG: --read-only-port="10255" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710455 4760 flags.go:64] FLAG: --register-node="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710460 4760 flags.go:64] FLAG: --register-schedulable="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710464 4760 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710472 4760 flags.go:64] FLAG: --registry-burst="10" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710476 4760 flags.go:64] FLAG: --registry-qps="5" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710480 4760 flags.go:64] FLAG: --reserved-cpus="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710484 4760 flags.go:64] FLAG: --reserved-memory="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710490 4760 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 
08:11:06.710494 4760 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710498 4760 flags.go:64] FLAG: --rotate-certificates="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710502 4760 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710506 4760 flags.go:64] FLAG: --runonce="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710510 4760 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710514 4760 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710518 4760 flags.go:64] FLAG: --seccomp-default="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710522 4760 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710526 4760 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710531 4760 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710535 4760 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710539 4760 flags.go:64] FLAG: --storage-driver-password="root" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710544 4760 flags.go:64] FLAG: --storage-driver-secure="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710548 4760 flags.go:64] FLAG: --storage-driver-table="stats" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710555 4760 flags.go:64] FLAG: --storage-driver-user="root" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710558 4760 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710563 4760 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 25 
08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710567 4760 flags.go:64] FLAG: --system-cgroups="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710571 4760 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710577 4760 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710581 4760 flags.go:64] FLAG: --tls-cert-file="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710584 4760 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710593 4760 flags.go:64] FLAG: --tls-min-version="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710597 4760 flags.go:64] FLAG: --tls-private-key-file="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710600 4760 flags.go:64] FLAG: --topology-manager-policy="none" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710611 4760 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710620 4760 flags.go:64] FLAG: --topology-manager-scope="container" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710629 4760 flags.go:64] FLAG: --v="2" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710637 4760 flags.go:64] FLAG: --version="false" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710645 4760 flags.go:64] FLAG: --vmodule="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710651 4760 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.710657 4760 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711036 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711043 4760 feature_gate.go:330] unrecognized 
feature gate: ImageStreamImportMode Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711047 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711052 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711057 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711062 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711067 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711071 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711075 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711079 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711083 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711087 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711091 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711096 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711107 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711114 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 08:11:06 crc 
kubenswrapper[4760]: W1125 08:11:06.711119 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711124 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711129 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711133 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711138 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711142 4760 feature_gate.go:330] unrecognized feature gate: Example Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711146 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711150 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711153 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711157 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711167 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711170 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711174 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711186 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711190 4760 
feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711194 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711197 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711201 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711205 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711208 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711211 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711215 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711219 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711222 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711226 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711229 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711233 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711236 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711241 4760 feature_gate.go:330] unrecognized feature gate: 
PersistentIPsForVirtualization Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711261 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711266 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711270 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711274 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711279 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711283 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711286 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711290 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711293 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711297 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711300 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711303 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711307 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711310 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 08:11:06 crc 
kubenswrapper[4760]: W1125 08:11:06.711314 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711318 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711322 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711326 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711330 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711335 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711350 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711354 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711357 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711361 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711364 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.711367 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.711382 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true 
MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.724606 4760 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.724662 4760 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724763 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724773 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724780 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724788 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724796 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724803 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724809 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724815 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724821 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724827 4760 feature_gate.go:330] unrecognized feature gate: Example Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724832 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724838 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724843 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724848 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724853 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724860 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724866 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724873 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724880 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724885 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724892 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724898 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724904 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724918 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724928 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724935 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724942 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724948 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724953 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724965 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 
08:11:06.724971 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724977 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724982 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724987 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724992 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.724997 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725003 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725008 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725013 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725018 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725024 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725030 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725036 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725042 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725047 4760 feature_gate.go:330] unrecognized feature gate: 
PersistentIPsForVirtualization Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725056 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725064 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725070 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725076 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725081 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725086 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725091 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725098 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725104 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725110 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725115 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725121 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725126 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725132 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725137 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725143 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725147 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725153 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725158 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725164 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725169 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725174 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725179 4760 feature_gate.go:330] unrecognized feature 
gate: SetEIPForNLBIngressController Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725184 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725189 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725195 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.725205 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725421 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725439 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725445 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725451 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725457 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725462 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725467 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725472 
4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725478 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725483 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725489 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725495 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725500 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725506 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725512 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725518 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725525 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725531 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725537 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725543 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725549 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725554 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725559 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725565 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725571 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725576 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725581 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725586 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725592 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725597 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725602 4760 feature_gate.go:330] 
unrecognized feature gate: DNSNameResolver Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725607 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725612 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725618 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725625 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725632 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725637 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725643 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725648 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725654 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725660 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725665 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725670 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725676 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725681 4760 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725686 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725691 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725697 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725702 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725707 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725713 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725721 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725726 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725732 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725738 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725744 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725750 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725756 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725761 4760 feature_gate.go:330] 
unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725767 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725772 4760 feature_gate.go:330] unrecognized feature gate: Example Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725777 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725782 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725787 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725792 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725799 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725804 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725809 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725815 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725820 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.725827 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.725836 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.726120 4760 server.go:940] "Client rotation is on, will bootstrap in background" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.732294 4760 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.732784 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.735585 4760 server.go:997] "Starting client certificate rotation" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.735623 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.736744 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-28 20:03:38.724887735 +0000 UTC Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.736859 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 83h52m31.98803547s for next certificate rotation Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.758569 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.760981 4760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.778830 4760 log.go:25] "Validated CRI v1 runtime API" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.814698 4760 log.go:25] "Validated CRI v1 image API" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.817109 4760 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.824359 4760 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-08-06-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.824431 4760 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.852465 4760 manager.go:217] Machine: {Timestamp:2025-11-25 08:11:06.849931618 +0000 UTC m=+0.558962433 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6bb7addf-227a-4139-b3ea-9499fe12a177 BootID:b7123858-d8c0-4a9e-a959-9447d279982b Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a3:28:35 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a3:28:35 Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fd:f7:96 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:54:ca:65 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b9:30:5d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:83:e6:1d Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:73:db:50 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9e:9b:a0:ec:20:61 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8a:aa:c4:cd:2a:37 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] 
SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.852827 4760 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.853220 4760 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.853750 4760 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.853945 4760 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.853992 4760 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.854238 4760 topology_manager.go:138] "Creating topology manager with none policy" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.854265 4760 container_manager_linux.go:303] "Creating device plugin manager" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.854783 4760 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.854819 4760 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.855084 4760 state_mem.go:36] "Initialized new in-memory state store" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.855655 4760 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.860957 4760 kubelet.go:418] "Attempting to sync node with API server" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.861018 4760 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.861054 4760 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.861069 4760 kubelet.go:324] "Adding apiserver pod source" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.861085 4760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 
08:11:06.866029 4760 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.867677 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.869335 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.869364 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:06 crc kubenswrapper[4760]: E1125 08:11:06.869527 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:06 crc kubenswrapper[4760]: E1125 08:11:06.869431 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.871495 4760 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 25 08:11:06 crc 
kubenswrapper[4760]: I1125 08:11:06.873190 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873232 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873272 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873285 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873307 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873320 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873334 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873355 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873370 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873383 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873428 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.873443 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.874531 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.875174 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.875672 4760 server.go:1280] "Started kubelet" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.875766 4760 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.876753 4760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.877700 4760 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.877708 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.877803 4760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.877813 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:09:13.639456655 +0000 UTC Nov 25 08:11:06 crc systemd[1]: Started Kubernetes Kubelet. 
Nov 25 08:11:06 crc kubenswrapper[4760]: E1125 08:11:06.877899 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.877950 4760 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.877967 4760 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.878044 4760 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.879137 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:06 crc kubenswrapper[4760]: E1125 08:11:06.879285 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.880079 4760 factory.go:55] Registering systemd factory Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.880162 4760 factory.go:221] Registration of the systemd container factory successfully Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.880537 4760 factory.go:153] Registering CRI-O factory Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.880615 4760 factory.go:221] Registration of the crio container factory successfully Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.880738 4760 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot 
unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.880848 4760 factory.go:103] Registering Raw factory Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.880920 4760 manager.go:1196] Started watching for new ooms in manager Nov 25 08:11:06 crc kubenswrapper[4760]: E1125 08:11:06.889375 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="200ms" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.890203 4760 server.go:460] "Adding debug handlers to kubelet server" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.890300 4760 manager.go:319] Starting recovery of all containers Nov 25 08:11:06 crc kubenswrapper[4760]: E1125 08:11:06.894596 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.21:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b31a5b8368274 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 08:11:06.8756261 +0000 UTC m=+0.584656905,LastTimestamp:2025-11-25 08:11:06.8756261 +0000 UTC m=+0.584656905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900553 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900634 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900648 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900662 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900674 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900684 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900696 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" 
seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900708 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900720 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900730 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900744 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900758 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900772 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900787 4760 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900818 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900829 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900844 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900855 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900868 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900881 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900894 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900904 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900915 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900924 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900937 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900951 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900971 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900985 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.900998 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901011 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901022 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901033 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901047 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901058 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901070 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901080 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901093 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901104 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901115 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901126 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901139 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901151 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901163 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901175 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901188 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901199 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901209 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901220 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901232 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901266 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901278 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901292 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901308 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901320 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901332 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901344 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" 
seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901354 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901366 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901375 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901385 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901393 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901403 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 
08:11:06.901413 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901424 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901436 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901446 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901457 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901468 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901479 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901490 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901502 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901513 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901524 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901534 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901546 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901557 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901575 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901590 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901602 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901612 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901625 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901636 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901646 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901657 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901667 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901677 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901689 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901705 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901715 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901727 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901739 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901749 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901759 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901769 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901780 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901793 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901803 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901813 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901824 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901836 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901846 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901855 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901864 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901873 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901889 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901901 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901913 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901929 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901944 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.901989 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902000 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902012 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902026 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902037 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902047 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902057 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902068 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902078 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902092 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902128 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902138 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902149 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902161 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902172 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902182 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902193 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902203 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902215 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902242 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902268 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902281 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902291 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902303 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902313 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902325 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902337 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902352 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902366 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902380 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902391 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902402 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902411 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902425 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902437 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902453 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902466 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902478 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902488 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902498 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902508 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902517 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902531 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902541 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902552 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902563 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902571 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902582 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902595 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902606 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902616 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902677 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902688 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902698 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902706 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902715 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902727 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902736 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902746 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902756 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902766 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902776 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902787 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902799 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902811 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902822 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902835 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902848 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902860 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902870 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902881 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902892 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902902 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902914 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902930 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902940 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902951 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902962 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902974 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902986 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.902996 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.903007 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.903026 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a"
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.903050 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.903064 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.903078 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.903094 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.903109 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.903123 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905816 4760 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905845 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905860 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905877 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905890 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905907 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905918 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905929 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905940 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905951 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905961 4760 reconstruct.go:97] "Volume reconstruction finished" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.905969 4760 reconciler.go:26] "Reconciler: start to sync state" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.911671 4760 manager.go:324] Recovery completed Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.922597 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 
08:11:06.924797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.924841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.924853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.925911 4760 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.925937 4760 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.925963 4760 state_mem.go:36] "Initialized new in-memory state store" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.935403 4760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.937046 4760 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.937085 4760 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.937116 4760 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 08:11:06 crc kubenswrapper[4760]: E1125 08:11:06.937163 4760 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 08:11:06 crc kubenswrapper[4760]: W1125 08:11:06.938559 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:06 crc kubenswrapper[4760]: E1125 08:11:06.938610 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.941886 4760 policy_none.go:49] "None policy: Start" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.943379 4760 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 08:11:06 crc kubenswrapper[4760]: I1125 08:11:06.943419 4760 state_mem.go:35] "Initializing new in-memory state store" Nov 25 08:11:06 crc kubenswrapper[4760]: E1125 08:11:06.978494 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.000745 4760 manager.go:334] "Starting Device Plugin manager" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.000840 4760 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.000861 4760 server.go:79] "Starting device plugin registration server" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.001343 4760 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.001366 4760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.001571 4760 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.001673 4760 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.001686 4760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 25 08:11:07 crc kubenswrapper[4760]: E1125 08:11:07.007487 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.037471 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.037611 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.038894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.038951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.038970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.039211 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.039988 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.040056 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.040345 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.040383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.040398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.040491 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.040563 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.040609 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041345 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041585 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc 
kubenswrapper[4760]: I1125 08:11:07.041732 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.041773 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.042320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.042350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.042363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.042398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.042418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.042428 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.042526 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.042586 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.042611 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.043290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.043299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.043318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.043336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.043335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.043401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.043491 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.043516 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.044570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.044592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.044602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: E1125 08:11:07.090641 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="400ms" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.101829 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.103017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.103051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.103062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.103085 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 08:11:07 crc kubenswrapper[4760]: E1125 08:11:07.103621 4760 kubelet_node_status.go:99] 
"Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.21:6443: connect: connection refused" node="crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.108827 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.108877 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.108908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.108932 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.108955 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" 
(UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.108973 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.109029 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.109076 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.109129 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.109155 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.109227 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.109300 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.109322 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.109337 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.109351 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc 
kubenswrapper[4760]: I1125 08:11:07.210442 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210513 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210535 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210553 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210572 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210588 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210616 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210633 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210646 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210665 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210682 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210713 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210775 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210643 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210802 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210817 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210885 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210887 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210900 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210962 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210987 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.210953 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.304448 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.305586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.305638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.305649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.305668 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 08:11:07 crc kubenswrapper[4760]: E1125 08:11:07.306189 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.129.56.21:6443: connect: connection refused" node="crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.371535 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.389827 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.394844 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.413978 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: W1125 08:11:07.418812 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1840a8ca364e47693fc4d9be86390db2451982e0fdb548f2ba2e4638ca931522 WatchSource:0}: Error finding container 1840a8ca364e47693fc4d9be86390db2451982e0fdb548f2ba2e4638ca931522: Status 404 returned error can't find the container with id 1840a8ca364e47693fc4d9be86390db2451982e0fdb548f2ba2e4638ca931522 Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.421682 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:07 crc kubenswrapper[4760]: W1125 08:11:07.426566 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f57a626489a4999a071a9bee1c646371ec4d55422f1225660e0023da49b623c7 WatchSource:0}: Error finding container f57a626489a4999a071a9bee1c646371ec4d55422f1225660e0023da49b623c7: Status 404 returned error can't find the container with id f57a626489a4999a071a9bee1c646371ec4d55422f1225660e0023da49b623c7 Nov 25 08:11:07 crc kubenswrapper[4760]: W1125 08:11:07.443025 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ff1dc9eacd02bdc3b3c1df95cbecc6d5dbd8d1a360a7a1d88a8fe17a298da023 WatchSource:0}: Error finding container ff1dc9eacd02bdc3b3c1df95cbecc6d5dbd8d1a360a7a1d88a8fe17a298da023: Status 404 returned error can't find the container with id ff1dc9eacd02bdc3b3c1df95cbecc6d5dbd8d1a360a7a1d88a8fe17a298da023 Nov 25 08:11:07 crc kubenswrapper[4760]: E1125 08:11:07.491832 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="800ms" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.707302 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.709011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.709052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 
08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.709062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.709086 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 08:11:07 crc kubenswrapper[4760]: E1125 08:11:07.709629 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.21:6443: connect: connection refused" node="crc" Nov 25 08:11:07 crc kubenswrapper[4760]: W1125 08:11:07.781158 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:07 crc kubenswrapper[4760]: E1125 08:11:07.781267 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:07 crc kubenswrapper[4760]: W1125 08:11:07.816261 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:07 crc kubenswrapper[4760]: E1125 08:11:07.816361 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" 
logger="UnhandledError" Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.876309 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.878329 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:14:45.610869855 +0000 UTC Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.878401 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 228h3m37.732472188s for next certificate rotation Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.942403 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f27cac9f3188372cf84178432abbe22b7220364bc6438438b9f280ef47f59a8a"} Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.943532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f57a626489a4999a071a9bee1c646371ec4d55422f1225660e0023da49b623c7"} Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.944552 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1840a8ca364e47693fc4d9be86390db2451982e0fdb548f2ba2e4638ca931522"} Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.945708 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ff1dc9eacd02bdc3b3c1df95cbecc6d5dbd8d1a360a7a1d88a8fe17a298da023"} Nov 25 08:11:07 crc kubenswrapper[4760]: I1125 08:11:07.946695 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e0fb6db659f422ad9f33b07da559bfc5c435026dd1ed0476d23acff6cb4b2cc"} Nov 25 08:11:08 crc kubenswrapper[4760]: W1125 08:11:08.065714 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:08 crc kubenswrapper[4760]: E1125 08:11:08.065817 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:08 crc kubenswrapper[4760]: W1125 08:11:08.279004 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:08 crc kubenswrapper[4760]: E1125 08:11:08.279122 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:08 crc kubenswrapper[4760]: E1125 08:11:08.292867 4760 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="1.6s" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.509777 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.511215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.511287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.511306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.511335 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 08:11:08 crc kubenswrapper[4760]: E1125 08:11:08.512002 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.21:6443: connect: connection refused" node="crc" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.877034 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.960008 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d"} Nov 25 08:11:08 
crc kubenswrapper[4760]: I1125 08:11:08.960063 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b"} Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.960073 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea"} Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.960087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01"} Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.960108 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.961241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.961301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.961317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.963159 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0" exitCode=0 Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.963284 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0"} Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.963489 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.964775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.964809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.964822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.967559 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.969770 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.969828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.969843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.977148 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46" exitCode=0 Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.977271 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46"} Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.977386 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.978650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.978684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.978695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.980763 4760 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e4549d34f0a64cd73b6e0c7155b9d08507cd6fa52d606800e4fd1859a9d54c2a" exitCode=0 Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.980878 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.980885 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e4549d34f0a64cd73b6e0c7155b9d08507cd6fa52d606800e4fd1859a9d54c2a"} Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.981945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.981973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 
08:11:08.981982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.983408 4760 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051" exitCode=0 Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.983477 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051"} Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.983529 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.985780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.985809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:08 crc kubenswrapper[4760]: I1125 08:11:08.985821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:09 crc kubenswrapper[4760]: W1125 08:11:09.736112 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:09 crc kubenswrapper[4760]: E1125 08:11:09.736613 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.876574 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:09 crc kubenswrapper[4760]: E1125 08:11:09.893829 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="3.2s" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.988217 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.988201 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f"} Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.988377 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5"} Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.988394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79"} Nov 25 08:11:09 crc kubenswrapper[4760]: 
I1125 08:11:09.989374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.989410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.989422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.991506 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b"} Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.991568 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.991563 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6"} Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.991597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873"} Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.991606 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d"} Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.991615 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36"} Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.992898 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.992934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.992947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.993766 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825" exitCode=0 Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.994055 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825"} Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.994458 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.995491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.995534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.995547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:09 crc 
kubenswrapper[4760]: I1125 08:11:09.996195 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3b7d3921db01c5969f16ede70d3ff767417330f708b885d315e3ea1b4cc155f1"} Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.996238 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.996242 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.998603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.998617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.998640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.998651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.998641 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:09 crc kubenswrapper[4760]: I1125 08:11:09.999072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:10 crc kubenswrapper[4760]: I1125 08:11:10.112753 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:10 crc kubenswrapper[4760]: I1125 08:11:10.113941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 08:11:10 crc kubenswrapper[4760]: I1125 08:11:10.114110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:10 crc kubenswrapper[4760]: I1125 08:11:10.114122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:10 crc kubenswrapper[4760]: I1125 08:11:10.114154 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 08:11:10 crc kubenswrapper[4760]: E1125 08:11:10.114713 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.21:6443: connect: connection refused" node="crc" Nov 25 08:11:10 crc kubenswrapper[4760]: W1125 08:11:10.143976 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Nov 25 08:11:10 crc kubenswrapper[4760]: E1125 08:11:10.144047 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:10 crc kubenswrapper[4760]: I1125 08:11:10.178457 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:10 crc kubenswrapper[4760]: W1125 08:11:10.231379 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: 
connection refused Nov 25 08:11:10 crc kubenswrapper[4760]: E1125 08:11:10.231496 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.000906 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1" exitCode=0 Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.001046 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.001076 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.001704 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1"} Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.001784 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.001854 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.001875 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.002209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 
08:11:11.002234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.002233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.002300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.002315 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.002242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.002952 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.002951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.003000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.003011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.002982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.003070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:11 crc kubenswrapper[4760]: I1125 08:11:11.452653 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.007477 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7"} Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.007534 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.007583 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.007541 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8"} Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.007699 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb"} Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.007715 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1"} Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.007724 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34"} Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.008621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.008669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.008679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.009850 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.009930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.009948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.089486 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.089697 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.090905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.090936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:12 crc kubenswrapper[4760]: I1125 08:11:12.090947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.008597 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 
08:11:13.011110 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.011155 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.012884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.012931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.012944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.013733 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.013776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.013788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.257497 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.257732 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.259189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.259305 4760 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.259331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.265085 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.315531 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.317354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.317406 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.317437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.317475 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 08:11:13 crc kubenswrapper[4760]: I1125 08:11:13.395199 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.014036 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.014156 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.014209 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.015185 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.015355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.015374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.015823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.015875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.015925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.015948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.015882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.015987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.846081 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.846344 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.847563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.847592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:14 crc kubenswrapper[4760]: I1125 08:11:14.847601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:15 crc kubenswrapper[4760]: I1125 08:11:15.089709 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 08:11:15 crc kubenswrapper[4760]: I1125 08:11:15.089809 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 08:11:16 crc kubenswrapper[4760]: I1125 08:11:16.389045 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:16 crc kubenswrapper[4760]: I1125 08:11:16.389877 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 08:11:16 crc kubenswrapper[4760]: I1125 08:11:16.389991 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:16 crc kubenswrapper[4760]: I1125 08:11:16.391681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:16 crc kubenswrapper[4760]: I1125 08:11:16.391721 4760 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:16 crc kubenswrapper[4760]: I1125 08:11:16.391730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:16 crc kubenswrapper[4760]: I1125 08:11:16.828426 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:17 crc kubenswrapper[4760]: E1125 08:11:17.007566 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 08:11:17 crc kubenswrapper[4760]: I1125 08:11:17.021182 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:17 crc kubenswrapper[4760]: I1125 08:11:17.022012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:17 crc kubenswrapper[4760]: I1125 08:11:17.022085 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:17 crc kubenswrapper[4760]: I1125 08:11:17.022097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.182481 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.183068 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 
192.168.126.11:17697: connect: connection refused" Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.351757 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.351842 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.355569 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.355678 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 08:11:20 crc kubenswrapper[4760]: 
I1125 08:11:20.750346 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.751078 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.752346 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.752424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:20 crc kubenswrapper[4760]: I1125 08:11:20.752442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:21 crc kubenswrapper[4760]: I1125 08:11:21.459199 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]log ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]etcd ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/generic-apiserver-start-informers ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 25 
08:11:21 crc kubenswrapper[4760]: [+]poststarthook/priority-and-fairness-filter ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-apiextensions-informers ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-apiextensions-controllers ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/crd-informer-synced ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-system-namespaces-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 25 08:11:21 crc kubenswrapper[4760]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/bootstrap-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/start-kube-aggregator-informers ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/apiservice-registration-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: 
[+]poststarthook/apiservice-wait-for-first-sync ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/apiservice-discovery-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]autoregister-completion ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/apiservice-openapi-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 25 08:11:21 crc kubenswrapper[4760]: livez check failed Nov 25 08:11:21 crc kubenswrapper[4760]: I1125 08:11:21.459273 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:11:21 crc kubenswrapper[4760]: I1125 08:11:21.488061 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 08:11:21 crc kubenswrapper[4760]: I1125 08:11:21.488134 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.090446 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.090519 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 08:11:25 crc kubenswrapper[4760]: E1125 08:11:25.326074 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.328664 4760 trace.go:236] Trace[518730636]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 08:11:13.718) (total time: 11609ms): Nov 25 08:11:25 crc kubenswrapper[4760]: Trace[518730636]: ---"Objects listed" error: 11609ms (08:11:25.328) Nov 25 08:11:25 crc kubenswrapper[4760]: Trace[518730636]: [11.609633915s] [11.609633915s] END Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.328715 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.333574 4760 trace.go:236] Trace[1885896823]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 08:11:14.049) (total time: 11283ms): Nov 25 08:11:25 crc kubenswrapper[4760]: Trace[1885896823]: ---"Objects listed" error: 11283ms (08:11:25.333) Nov 25 08:11:25 crc kubenswrapper[4760]: Trace[1885896823]: [11.283708613s] [11.283708613s] END Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.333619 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 08:11:25 crc 
kubenswrapper[4760]: I1125 08:11:25.335989 4760 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.336113 4760 trace.go:236] Trace[2134286831]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 08:11:10.911) (total time: 14424ms):
Nov 25 08:11:25 crc kubenswrapper[4760]: Trace[2134286831]: ---"Objects listed" error:<nil> 14424ms (08:11:25.335)
Nov 25 08:11:25 crc kubenswrapper[4760]: Trace[2134286831]: [14.424781435s] [14.424781435s] END
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.336140 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.336639 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 25 08:11:25 crc kubenswrapper[4760]: E1125 08:11:25.352957 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.872846 4760 apiserver.go:52] "Watching apiserver"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.879141 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.879512 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.879955 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.880074 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.880125 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 08:11:25 crc kubenswrapper[4760]: E1125 08:11:25.880226 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 08:11:25 crc kubenswrapper[4760]: E1125 08:11:25.880315 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.880632 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.880879 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 08:11:25 crc kubenswrapper[4760]: E1125 08:11:25.880913 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.880921 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.882347 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.882430 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.883442 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.883507 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.883701 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.884214 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.885526 4760 reflector.go:368]
Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.885634 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.886973 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.920184 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.967016 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.978745 4760 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 25 08:11:25 crc kubenswrapper[4760]: I1125 08:11:25.994677 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.005994 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.015703 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.026889 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.039397 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.039681 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.039775 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.039857 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.039961 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040047 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040123 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040230 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 08:11:26 
crc kubenswrapper[4760]: I1125 08:11:26.039762 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040185 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040377 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040328 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040351 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040575 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040591 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040814 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040915 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040994 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041063 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041209 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041298 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041378 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041452 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041522 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.040875 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041028 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041110 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041229 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041301 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041310 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041554 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041786 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041810 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041834 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041852 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.041927 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042110 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042292 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042575 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042590 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042756 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042863 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042960 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.042986 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043101 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043125 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 08:11:26 crc 
kubenswrapper[4760]: I1125 08:11:26.043147 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043168 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043193 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043217 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043239 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043279 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043300 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043323 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043344 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043346 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043380 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043397 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043414 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043430 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043445 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043461 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043480 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043496 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043511 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043525 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043542 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043557 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043573 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043590 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043623 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043644 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043667 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043704 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043720 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc 
kubenswrapper[4760]: I1125 08:11:26.043755 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043772 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043786 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043810 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043825 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043851 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" 
(UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043865 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043879 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043895 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043913 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: 
"31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043924 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043939 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043954 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043983 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 08:11:26 crc 
kubenswrapper[4760]: I1125 08:11:26.044000 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044016 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044030 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044044 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044059 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044089 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044105 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044120 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044135 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044150 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044181 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044211 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044250 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.044428 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:11:26.544406254 +0000 UTC m=+20.253437099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044830 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044857 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044875 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044899 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.043085 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044924 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044950 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044970 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044995 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045025 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045047 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045068 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045088 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045107 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045128 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045149 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045170 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045191 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045212 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045233 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045272 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" 
(UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045294 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045317 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045338 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045362 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045383 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045407 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045431 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045455 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045508 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045529 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 08:11:26 crc 
kubenswrapper[4760]: I1125 08:11:26.045550 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045573 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045579 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045590 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045636 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045665 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045690 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045721 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045747 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045772 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045831 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045858 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045882 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045933 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045956 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045981 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046005 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046029 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046053 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046076 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046102 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046126 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046149 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 08:11:26 crc 
kubenswrapper[4760]: I1125 08:11:26.046198 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046221 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046243 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046283 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046305 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046327 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046352 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046401 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046429 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046451 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 
08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046500 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046546 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046570 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046620 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046641 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046669 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046694 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046717 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 
08:11:26.046763 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046785 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046810 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046837 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046862 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046886 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" 
(UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046910 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046931 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046952 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046980 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047002 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047025 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047048 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047842 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047880 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047906 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047932 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047959 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048004 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048026 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048052 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 
08:11:26.048085 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048111 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048137 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048162 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048186 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048213 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048302 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048360 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048391 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048422 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048451 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048482 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048508 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048535 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 
08:11:26.048588 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048715 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048811 4760 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048830 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048846 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048859 4760 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048873 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048886 4760 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048898 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048908 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048921 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048934 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048948 4760 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048991 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049006 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") 
on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049019 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049032 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049045 4760 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049058 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049108 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049120 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049134 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049147 4760 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049160 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049173 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049186 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049201 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049216 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.053156 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.045938 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046104 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.053917 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.054148 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.054181 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046204 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046333 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046350 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.054487 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046361 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.044459 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046380 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046782 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047107 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047215 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.046601 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047477 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.054589 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047585 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047653 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047666 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047676 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047670 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.047854 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048413 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048667 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048937 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.048948 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049276 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049424 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049558 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049609 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.049734 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.050028 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.050072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.050136 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.050293 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.050309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.050412 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.050451 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.050943 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.051248 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.051450 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.051501 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.054905 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.051585 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.051599 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.051857 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.052180 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.052245 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.052525 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.052629 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.052659 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.052755 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.055000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.052767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.052819 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.052874 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.053440 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.053442 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.053690 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.055046 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.055086 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.055238 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.055508 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.055536 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.055945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.055975 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.056242 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.057000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.057029 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.057182 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.057079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.057273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.057721 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.057738 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.057959 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.058018 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.058399 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.058596 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.058899 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b" exitCode=255 Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.058935 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b"} Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059068 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059104 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059110 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059213 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.059228 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059311 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059454 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.059317 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 08:11:26.559296533 +0000 UTC m=+20.268327428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059543 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059611 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059986 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059994 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.060123 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.060435 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.060445 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.060516 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.060732 4760 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.061113 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.060736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.060701 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.060843 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.060928 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.061014 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.061195 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.061273 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.061273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.061422 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.061927 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.062096 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.059534 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.064685 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:26.564663945 +0000 UTC m=+20.273694740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.065998 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.066331 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.066588 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.066662 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.066870 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.066891 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.067347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.067403 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.067527 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.067629 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.068279 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.068364 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.070724 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.074385 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.074638 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.074653 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.074664 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.074716 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:26.574696867 +0000 UTC m=+20.283727662 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.075902 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.076167 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.075392 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.076352 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.076446 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.076794 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.077397 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.077412 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.078055 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.079230 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.079439 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.079566 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.079977 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.080462 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.080562 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.080669 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.080884 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.080934 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.080997 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:26.580969924 +0000 UTC m=+20.290000719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.080999 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.081628 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.081721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.082072 4760 scope.go:117] "RemoveContainer" containerID="cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.084237 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.085368 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.086165 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.086492 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.086730 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.087107 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.088232 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.088514 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.091080 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.091314 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.091444 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.091784 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.092359 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.092556 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.092920 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.092958 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.093205 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.094703 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.094736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.095165 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.095736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.096285 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.097849 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.100292 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.100889 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.102480 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.103101 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.103296 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.103609 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.103837 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.104046 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.104385 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.104626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.105544 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.105671 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.107533 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.109204 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.118778 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.125990 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.136028 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.137708 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.147699 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.156965 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157116 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157130 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157141 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157154 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157164 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157174 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157208 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157219 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc 
kubenswrapper[4760]: I1125 08:11:26.157229 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157240 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157251 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157291 4760 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157304 4760 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157314 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157326 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157337 4760 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157347 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157359 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157368 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157378 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157387 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157399 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157409 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" 
(UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157418 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157430 4760 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157442 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157453 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157464 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157475 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157486 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157499 4760 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157510 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157563 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.157520 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158207 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 
25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158222 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158234 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158265 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158278 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158290 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158301 4760 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158314 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 
08:11:26.158324 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158334 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158344 4760 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158354 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158363 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158372 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158382 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158393 4760 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158405 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158417 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158428 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158438 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158448 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158459 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158470 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158483 4760 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158494 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158516 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158528 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158538 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158549 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158561 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158572 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158583 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158594 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158606 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158616 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158627 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158640 4760 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158651 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158662 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158674 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158685 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158699 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158710 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158722 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158733 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158742 4760 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158753 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158764 4760 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158774 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158786 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158797 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc 
kubenswrapper[4760]: I1125 08:11:26.158808 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158816 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158824 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158832 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158840 4760 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158848 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158856 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158865 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158874 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158881 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158890 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158898 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158906 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158914 4760 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158922 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: 
I1125 08:11:26.158930 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158940 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158948 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158959 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158967 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158975 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158983 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.158992 4760 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159000 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159008 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159018 4760 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159027 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159035 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159044 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159052 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") 
on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159060 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159069 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159079 4760 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159087 4760 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159095 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159102 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159110 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159117 4760 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159125 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159132 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159140 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159148 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159155 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159163 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159170 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" 
Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159179 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159187 4760 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159203 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159211 4760 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159219 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159228 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159235 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159243 4760 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159267 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159286 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159297 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159306 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159314 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159323 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159331 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159339 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159348 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159356 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159364 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159372 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159380 4760 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159388 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159396 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159404 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159413 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159421 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159429 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159437 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159445 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 
crc kubenswrapper[4760]: I1125 08:11:26.159459 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159466 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159475 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159483 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159490 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159498 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159506 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159514 4760 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159522 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.159532 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.193795 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.203226 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.208975 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 08:11:26 crc kubenswrapper[4760]: W1125 08:11:26.211901 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-aaed1d4fbba5ceb28ebd651bbf664edb1ac595f4e5f7ef7db085fb4c77d8917e WatchSource:0}: Error finding container aaed1d4fbba5ceb28ebd651bbf664edb1ac595f4e5f7ef7db085fb4c77d8917e: Status 404 returned error can't find the container with id aaed1d4fbba5ceb28ebd651bbf664edb1ac595f4e5f7ef7db085fb4c77d8917e Nov 25 08:11:26 crc kubenswrapper[4760]: W1125 08:11:26.215247 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-23c33fb4b1168c4362ff7c38d0ece33809983d85f4f9b541f5f1e01ac55fa395 WatchSource:0}: Error finding container 23c33fb4b1168c4362ff7c38d0ece33809983d85f4f9b541f5f1e01ac55fa395: Status 404 returned error can't find the container with id 23c33fb4b1168c4362ff7c38d0ece33809983d85f4f9b541f5f1e01ac55fa395 Nov 25 08:11:26 crc kubenswrapper[4760]: W1125 08:11:26.225529 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-08de49c7191de7d86eead39818b5471648004c954791a11377bed77b31aeeda5 WatchSource:0}: Error finding container 08de49c7191de7d86eead39818b5471648004c954791a11377bed77b31aeeda5: Status 404 returned error can't find the container with id 08de49c7191de7d86eead39818b5471648004c954791a11377bed77b31aeeda5 Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.459753 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.474131 4760 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.484230 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.493847 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.503690 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.514230 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.525156 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.533801 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.565402 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.565499 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.565525 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.565586 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:11:27.565563181 +0000 UTC m=+21.274593976 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.565623 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.565677 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:27.565667164 +0000 UTC m=+21.274697959 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.565687 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.565724 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 08:11:27.565714086 +0000 UTC m=+21.274744871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.666845 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.666921 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.667066 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.667087 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.667099 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.667150 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:27.667133134 +0000 UTC m=+21.376163939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.667555 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.667579 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.667590 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:26 crc kubenswrapper[4760]: E1125 08:11:26.667620 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:27.667610448 +0000 UTC m=+21.376641243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.833000 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.845181 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.853325 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.863570 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.875871 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.898740 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.910375 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.941308 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.942004 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.942723 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.942947 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.943471 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.944151 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.944662 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.945239 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.947345 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.950535 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.951633 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.952310 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.953209 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.953940 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.954717 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.955533 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.956459 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.957384 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.957903 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.960830 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.961814 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.962293 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.962585 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.964012 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.964751 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.966428 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.967228 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.967966 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.968738 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.969280 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.970597 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.971339 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.972032 4760 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.972872 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.975171 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.976033 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.977287 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.977379 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.979807 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" 
path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.980775 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.981973 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.982910 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.984359 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.984953 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.986310 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.986872 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.987417 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.987845 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.988363 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.989302 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.989774 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.990904 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.991363 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.992215 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.992677 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.993167 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.994106 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.994666 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 08:11:26 crc kubenswrapper[4760]: I1125 08:11:26.996575 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.006642 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.020855 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with 
unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e9
01dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.031091 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.040088 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.047795 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.065574 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a"} Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.065619 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476"} Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.065632 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"23c33fb4b1168c4362ff7c38d0ece33809983d85f4f9b541f5f1e01ac55fa395"} Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.067089 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2"} Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.067398 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"aaed1d4fbba5ceb28ebd651bbf664edb1ac595f4e5f7ef7db085fb4c77d8917e"} Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.068795 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.070615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106"} Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.070837 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:27 crc 
kubenswrapper[4760]: I1125 08:11:27.071435 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"08de49c7191de7d86eead39818b5471648004c954791a11377bed77b31aeeda5"} Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.081531 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.087622 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.096576 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with 
unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e9
01dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.108182 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.119860 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.129145 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.136750 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.145864 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.154393 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.164914 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.174345 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.184385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.194094 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.203882 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with 
unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e9
01dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.213432 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.223946 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.233804 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.572418 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.572490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.572516 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.572611 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.572618 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:11:29.572590553 +0000 UTC m=+23.281621348 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.573609 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.573785 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:29.573752926 +0000 UTC m=+23.282783761 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.575015 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:29.5749782 +0000 UTC m=+23.284009025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.676131 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.676195 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.676502 4760 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.676508 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.676575 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.676594 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.676525 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.676680 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:29.676646945 +0000 UTC m=+23.385677900 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.676685 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.676760 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:29.676735498 +0000 UTC m=+23.385766293 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.937926 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.938001 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:27 crc kubenswrapper[4760]: I1125 08:11:27.937935 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.938117 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.938326 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:27 crc kubenswrapper[4760]: E1125 08:11:27.938461 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:28 crc kubenswrapper[4760]: I1125 08:11:28.087077 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:28Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:28 crc kubenswrapper[4760]: I1125 08:11:28.098210 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:28Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:28 crc kubenswrapper[4760]: I1125 08:11:28.111735 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:28Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:28 crc kubenswrapper[4760]: I1125 08:11:28.124471 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98
d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:28Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:28 crc kubenswrapper[4760]: I1125 08:11:28.137652 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:28Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:28 crc kubenswrapper[4760]: I1125 08:11:28.151116 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:28Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:28 crc kubenswrapper[4760]: I1125 08:11:28.161613 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:28Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:28 crc kubenswrapper[4760]: I1125 08:11:28.170935 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:28Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.077720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d"} Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.091981 
4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814
a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:29Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.102990 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:29Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.114505 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:29Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.127997 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:29Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.138733 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98
d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:29Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.151064 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:29Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.164462 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:29Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.177847 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:29Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.595800 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.595896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.595926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.596026 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:11:33.595996397 +0000 UTC m=+27.305027192 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.596046 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.596098 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:33.59608429 +0000 UTC m=+27.305115085 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.596117 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.596231 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:33.596209913 +0000 UTC m=+27.305240709 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.697500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.697582 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.697764 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.697797 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.697808 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.697860 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:33.697842738 +0000 UTC m=+27.406873533 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.697776 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.697907 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.697924 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.698009 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:33.697985072 +0000 UTC m=+27.407015907 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.937928 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.938023 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:29 crc kubenswrapper[4760]: I1125 08:11:29.938020 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.938047 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.938149 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:29 crc kubenswrapper[4760]: E1125 08:11:29.938404 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:30 crc kubenswrapper[4760]: I1125 08:11:30.791805 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 25 08:11:30 crc kubenswrapper[4760]: I1125 08:11:30.816587 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 25 08:11:30 crc kubenswrapper[4760]: I1125 08:11:30.818115 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 08:11:30 crc kubenswrapper[4760]: I1125 08:11:30.823035 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:30Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:30 crc kubenswrapper[4760]: I1125 08:11:30.912420 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:30Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:30 crc kubenswrapper[4760]: I1125 08:11:30.952050 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:30Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:30 crc kubenswrapper[4760]: I1125 08:11:30.966774 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:30Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:30 crc kubenswrapper[4760]: I1125 08:11:30.978842 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98
d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:30Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:30 crc kubenswrapper[4760]: I1125 08:11:30.992039 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:30Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.004728 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.018927 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.035029 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.053470 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.082424 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb6
8e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.105334 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 
08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.113334 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.114714 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tj64g"] Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.115014 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tj64g" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.116863 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.117357 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.117982 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.130279 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.151322 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.164163 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.177161 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.193948 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.211277 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.230655 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.244140 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.266329 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.278200 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.292520 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.304532 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98
d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.310430 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/06641e4d-9c74-4b5c-a664-d4f00118885a-hosts-file\") pod \"node-resolver-tj64g\" (UID: \"06641e4d-9c74-4b5c-a664-d4f00118885a\") " pod="openshift-dns/node-resolver-tj64g" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.310479 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhd7w\" (UniqueName: \"kubernetes.io/projected/06641e4d-9c74-4b5c-a664-d4f00118885a-kube-api-access-hhd7w\") pod \"node-resolver-tj64g\" (UID: \"06641e4d-9c74-4b5c-a664-d4f00118885a\") " pod="openshift-dns/node-resolver-tj64g" Nov 25 08:11:31 crc 
kubenswrapper[4760]: I1125 08:11:31.338082 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.369907 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.397911 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.411322 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/06641e4d-9c74-4b5c-a664-d4f00118885a-hosts-file\") pod \"node-resolver-tj64g\" (UID: \"06641e4d-9c74-4b5c-a664-d4f00118885a\") " pod="openshift-dns/node-resolver-tj64g" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.411371 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhd7w\" (UniqueName: \"kubernetes.io/projected/06641e4d-9c74-4b5c-a664-d4f00118885a-kube-api-access-hhd7w\") pod \"node-resolver-tj64g\" (UID: \"06641e4d-9c74-4b5c-a664-d4f00118885a\") " pod="openshift-dns/node-resolver-tj64g" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.411517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/06641e4d-9c74-4b5c-a664-d4f00118885a-hosts-file\") pod 
\"node-resolver-tj64g\" (UID: \"06641e4d-9c74-4b5c-a664-d4f00118885a\") " pod="openshift-dns/node-resolver-tj64g" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.621866 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhd7w\" (UniqueName: \"kubernetes.io/projected/06641e4d-9c74-4b5c-a664-d4f00118885a-kube-api-access-hhd7w\") pod \"node-resolver-tj64g\" (UID: \"06641e4d-9c74-4b5c-a664-d4f00118885a\") " pod="openshift-dns/node-resolver-tj64g" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.728114 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tj64g" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.753285 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.754764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.754802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.754814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.754879 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.761362 4760 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.761615 4760 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.762558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 
08:11:31.762588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.762598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.762611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.762620 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:31Z","lastTransitionTime":"2025-11-25T08:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.780432 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.783848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.783884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.783896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.783912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.783925 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:31Z","lastTransitionTime":"2025-11-25T08:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.794481 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.798917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.798961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.798970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.798986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.798997 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:31Z","lastTransitionTime":"2025-11-25T08:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.813977 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.820997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.821048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.821060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.821086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.821108 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:31Z","lastTransitionTime":"2025-11-25T08:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.839731 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.845086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.845131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.845142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.845159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.845170 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:31Z","lastTransitionTime":"2025-11-25T08:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.871382 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.871502 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.872844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.872868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.872876 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.872888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.872897 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:31Z","lastTransitionTime":"2025-11-25T08:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.888762 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fcnxs"] Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.889349 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-r4rlz"] Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.889464 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.890184 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-x6n7t"] Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.890330 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.890539 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-x6n7t" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.892938 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.893100 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.893597 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.893628 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.894015 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.894065 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.894519 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.894631 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 08:11:31 crc kubenswrapper[4760]: W1125 08:11:31.895546 4760 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.895609 4760 
reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 08:11:31 crc kubenswrapper[4760]: W1125 08:11:31.896090 4760 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.896148 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.896150 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.897190 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.913044 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98
d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.934917 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.937312 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.937312 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.937412 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.937500 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.937619 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:31 crc kubenswrapper[4760]: E1125 08:11:31.937744 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.946861 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.958224 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.969314 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.975384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.975424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.975432 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.975444 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.975454 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:31Z","lastTransitionTime":"2025-11-25T08:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.980448 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:31 crc kubenswrapper[4760]: W1125 08:11:31.986401 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06641e4d_9c74_4b5c_a664_d4f00118885a.slice/crio-0576b793c7ec7e179053dcd92ba4d0d63fd20056dffb516b159686763cfdec5a WatchSource:0}: Error finding container 0576b793c7ec7e179053dcd92ba4d0d63fd20056dffb516b159686763cfdec5a: Status 404 returned error can't find the container with id 0576b793c7ec7e179053dcd92ba4d0d63fd20056dffb516b159686763cfdec5a Nov 25 08:11:31 crc kubenswrapper[4760]: I1125 08:11:31.992155 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.006694 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.014942 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f5c9247-0023-4cef-8299-ca90407f76f2-proxy-tls\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.014992 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjjr5\" (UniqueName: \"kubernetes.io/projected/29261de0-ae0c-4828-afed-e6036aa367cf-kube-api-access-xjjr5\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f5366e35-adc6-45e2-966c-55fc7e6c8b79-cni-binary-copy\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015041 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015074 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-var-lib-cni-multus\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015097 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wbn9\" (UniqueName: \"kubernetes.io/projected/f5366e35-adc6-45e2-966c-55fc7e6c8b79-kube-api-access-7wbn9\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015117 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-multus-conf-dir\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015136 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-system-cni-dir\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015183 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/29261de0-ae0c-4828-afed-e6036aa367cf-multus-daemon-config\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015206 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-etc-kubernetes\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015283 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-run-netns\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015307 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-os-release\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015338 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-cnibin\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015357 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-system-cni-dir\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015378 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-multus-cni-dir\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015401 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-var-lib-kubelet\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015423 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-run-multus-certs\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcvdx\" (UniqueName: \"kubernetes.io/projected/2f5c9247-0023-4cef-8299-ca90407f76f2-kube-api-access-wcvdx\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015468 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f5c9247-0023-4cef-8299-ca90407f76f2-mcd-auth-proxy-config\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 
25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015497 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-multus-socket-dir-parent\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015517 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-run-k8s-cni-cncf-io\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015539 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-cnibin\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-var-lib-cni-bin\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-os-release\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 
08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015601 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/29261de0-ae0c-4828-afed-e6036aa367cf-cni-binary-copy\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2f5c9247-0023-4cef-8299-ca90407f76f2-rootfs\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f5366e35-adc6-45e2-966c-55fc7e6c8b79-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.015842 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-hostroot\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.019059 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.031455 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.040601 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.050878 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.066201 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.079262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.079302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.079311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.079325 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.079334 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:32Z","lastTransitionTime":"2025-11-25T08:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.079486 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee
5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.085623 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tj64g" event={"ID":"06641e4d-9c74-4b5c-a664-d4f00118885a","Type":"ContainerStarted","Data":"0576b793c7ec7e179053dcd92ba4d0d63fd20056dffb516b159686763cfdec5a"} Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.091904 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.103425 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.109142 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.113111 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.115143 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116435 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/29261de0-ae0c-4828-afed-e6036aa367cf-cni-binary-copy\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116471 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-os-release\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2f5c9247-0023-4cef-8299-ca90407f76f2-rootfs\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116515 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f5366e35-adc6-45e2-966c-55fc7e6c8b79-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116533 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-hostroot\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " 
pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116548 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f5c9247-0023-4cef-8299-ca90407f76f2-proxy-tls\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-var-lib-cni-multus\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjjr5\" (UniqueName: \"kubernetes.io/projected/29261de0-ae0c-4828-afed-e6036aa367cf-kube-api-access-xjjr5\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116598 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f5366e35-adc6-45e2-966c-55fc7e6c8b79-cni-binary-copy\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116602 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-os-release\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: 
I1125 08:11:32.116603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2f5c9247-0023-4cef-8299-ca90407f76f2-rootfs\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116700 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wbn9\" (UniqueName: \"kubernetes.io/projected/f5366e35-adc6-45e2-966c-55fc7e6c8b79-kube-api-access-7wbn9\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116720 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-multus-conf-dir\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-system-cni-dir\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116755 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/29261de0-ae0c-4828-afed-e6036aa367cf-multus-daemon-config\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116771 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-etc-kubernetes\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116777 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-var-lib-cni-multus\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-cnibin\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116814 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-run-netns\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116829 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-os-release\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116854 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-multus-cni-dir\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116867 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-hostroot\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116882 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-var-lib-kubelet\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-var-lib-kubelet\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116910 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-system-cni-dir\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: 
\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116936 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcvdx\" (UniqueName: \"kubernetes.io/projected/2f5c9247-0023-4cef-8299-ca90407f76f2-kube-api-access-wcvdx\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-run-multus-certs\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-multus-socket-dir-parent\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.116986 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-run-k8s-cni-cncf-io\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117002 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-cnibin\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: 
\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117018 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f5c9247-0023-4cef-8299-ca90407f76f2-mcd-auth-proxy-config\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117040 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-var-lib-cni-bin\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117060 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-multus-conf-dir\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117076 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-var-lib-cni-bin\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117098 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-system-cni-dir\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " 
pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117179 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-system-cni-dir\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117231 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-etc-kubernetes\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117264 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-run-multus-certs\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117277 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-cnibin\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117300 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-run-netns\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117302 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-multus-socket-dir-parent\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117316 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-host-run-k8s-cni-cncf-io\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117341 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-os-release\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117375 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-cnibin\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117381 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/29261de0-ae0c-4828-afed-e6036aa367cf-multus-cni-dir\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.117629 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/f5366e35-adc6-45e2-966c-55fc7e6c8b79-cni-binary-copy\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.118018 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f5c9247-0023-4cef-8299-ca90407f76f2-mcd-auth-proxy-config\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.118362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f5366e35-adc6-45e2-966c-55fc7e6c8b79-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.119065 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/29261de0-ae0c-4828-afed-e6036aa367cf-cni-binary-copy\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.119898 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f5c9247-0023-4cef-8299-ca90407f76f2-proxy-tls\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.122909 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/f5366e35-adc6-45e2-966c-55fc7e6c8b79-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.125500 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.136125 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wbn9\" (UniqueName: \"kubernetes.io/projected/f5366e35-adc6-45e2-966c-55fc7e6c8b79-kube-api-access-7wbn9\") pod \"multus-additional-cni-plugins-r4rlz\" (UID: \"f5366e35-adc6-45e2-966c-55fc7e6c8b79\") " pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.137658 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjjr5\" (UniqueName: 
\"kubernetes.io/projected/29261de0-ae0c-4828-afed-e6036aa367cf-kube-api-access-xjjr5\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.137987 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcvdx\" (UniqueName: \"kubernetes.io/projected/2f5c9247-0023-4cef-8299-ca90407f76f2-kube-api-access-wcvdx\") pod \"machine-config-daemon-fcnxs\" (UID: \"2f5c9247-0023-4cef-8299-ca90407f76f2\") " pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.144682 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.157111 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 
08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.166848 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.178748 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.181939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.181979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.181992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.182031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.182042 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:32Z","lastTransitionTime":"2025-11-25T08:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.191954 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.203489 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.209809 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.212421 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" Nov 25 08:11:32 crc kubenswrapper[4760]: W1125 08:11:32.214572 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f5c9247_0023_4cef_8299_ca90407f76f2.slice/crio-ba6a677568c0cbd5b087768e1d86df8dcecb7b29daf2bf5bd804f33f61456b17 WatchSource:0}: Error finding container ba6a677568c0cbd5b087768e1d86df8dcecb7b29daf2bf5bd804f33f61456b17: Status 404 returned error can't find the container with id ba6a677568c0cbd5b087768e1d86df8dcecb7b29daf2bf5bd804f33f61456b17 Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.222634 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: W1125 08:11:32.224576 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5366e35_adc6_45e2_966c_55fc7e6c8b79.slice/crio-9b979d14d6b9997bb957d407c5cfd0b1193649a8035a8e9dcd07d5d733118e21 WatchSource:0}: Error finding container 9b979d14d6b9997bb957d407c5cfd0b1193649a8035a8e9dcd07d5d733118e21: Status 404 returned error can't find the container with id 9b979d14d6b9997bb957d407c5cfd0b1193649a8035a8e9dcd07d5d733118e21 Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.232818 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.245373 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.257232 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c2bhp"] Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.257982 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.262988 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.263069 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.263134 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.263321 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.263547 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.263685 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.263878 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.264001 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.283041 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.284448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.284488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.284498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.284511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.284520 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:32Z","lastTransitionTime":"2025-11-25T08:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.297536 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.315619 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.328973 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.340624 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.355972 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.367308 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.378999 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.386862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.386908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.386921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.386938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.386950 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:32Z","lastTransitionTime":"2025-11-25T08:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.391546 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.403027 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.414467 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.418726 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-etc-openvswitch\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.418770 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-systemd-units\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.418849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-ovn\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.418886 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-script-lib\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.418906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-kubelet\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.418929 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-openvswitch\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.418964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-var-lib-openvswitch\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419125 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-ovn-kubernetes\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/244c5c71-3110-4dcd-89f3-4dadfc405131-ovn-node-metrics-cert\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419206 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-slash\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419223 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-netns\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419291 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-systemd\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419312 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-bin\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419328 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk6n\" (UniqueName: \"kubernetes.io/projected/244c5c71-3110-4dcd-89f3-4dadfc405131-kube-api-access-2fk6n\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419379 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-node-log\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419394 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-env-overrides\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419432 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-log-socket\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419449 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-netd\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419493 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.419523 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-config\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.426529 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.437479 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.449594 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.462273 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.473546 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.491528 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.492895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.492938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.492948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.492963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.492975 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:32Z","lastTransitionTime":"2025-11-25T08:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.504462 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.514785 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521115 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-var-lib-openvswitch\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-ovn-kubernetes\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521240 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/244c5c71-3110-4dcd-89f3-4dadfc405131-ovn-node-metrics-cert\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521285 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-slash\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521310 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-netns\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521315 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-slash\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521332 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-systemd\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521371 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-bin\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521381 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-systemd\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521390 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fk6n\" (UniqueName: \"kubernetes.io/projected/244c5c71-3110-4dcd-89f3-4dadfc405131-kube-api-access-2fk6n\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521285 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-ovn-kubernetes\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521420 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-node-log\") pod 
\"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521420 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-netns\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521438 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-env-overrides\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521525 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-log-socket\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-netd\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521569 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c2bhp\" (UID: 
\"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521604 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-config\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521622 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-etc-openvswitch\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-systemd-units\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521663 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-ovn\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521677 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-script-lib\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-kubelet\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521707 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-bin\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-openvswitch\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521730 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-openvswitch\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521774 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 
25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521806 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-log-socket\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521829 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-netd\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-var-lib-openvswitch\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521856 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-ovn\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521885 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-etc-openvswitch\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521896 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-systemd-units\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521908 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-env-overrides\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.521947 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-kubelet\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.522357 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-config\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.522436 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-script-lib\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.522514 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-node-log\") pod 
\"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.524513 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/244c5c71-3110-4dcd-89f3-4dadfc405131-ovn-node-metrics-cert\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.531139 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.543211 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.548900 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fk6n\" (UniqueName: \"kubernetes.io/projected/244c5c71-3110-4dcd-89f3-4dadfc405131-kube-api-access-2fk6n\") pod \"ovnkube-node-c2bhp\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.558461 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.574862 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.578519 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.595968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.596029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.596041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.596060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.596071 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:32Z","lastTransitionTime":"2025-11-25T08:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:32 crc kubenswrapper[4760]: W1125 08:11:32.675188 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod244c5c71_3110_4dcd_89f3_4dadfc405131.slice/crio-8806f0460a7db9cbd2ee718905b96b5e8f5048f68ac5117b85d7fe16613e7222 WatchSource:0}: Error finding container 8806f0460a7db9cbd2ee718905b96b5e8f5048f68ac5117b85d7fe16613e7222: Status 404 returned error can't find the container with id 8806f0460a7db9cbd2ee718905b96b5e8f5048f68ac5117b85d7fe16613e7222 Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.698471 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.698510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.698519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.698533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.698541 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:32Z","lastTransitionTime":"2025-11-25T08:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.801070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.801111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.801123 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.801141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.801153 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:32Z","lastTransitionTime":"2025-11-25T08:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.878262 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.903039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.903083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.903093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.903106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:32 crc kubenswrapper[4760]: I1125 08:11:32.903116 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:32Z","lastTransitionTime":"2025-11-25T08:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.005612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.005646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.005654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.005667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.005675 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.088971 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"ba6a677568c0cbd5b087768e1d86df8dcecb7b29daf2bf5bd804f33f61456b17"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.090184 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerStarted","Data":"9b979d14d6b9997bb957d407c5cfd0b1193649a8035a8e9dcd07d5d733118e21"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.091517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tj64g" event={"ID":"06641e4d-9c74-4b5c-a664-d4f00118885a","Type":"ContainerStarted","Data":"f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.092302 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"8806f0460a7db9cbd2ee718905b96b5e8f5048f68ac5117b85d7fe16613e7222"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.103010 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.107408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.107449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.107460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.107477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.107489 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.112017 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.117611 4760 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.117702 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29261de0-ae0c-4828-afed-e6036aa367cf-multus-daemon-config podName:29261de0-ae0c-4828-afed-e6036aa367cf nodeName:}" failed. No retries permitted until 2025-11-25 08:11:33.617674988 +0000 UTC m=+27.326705783 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/29261de0-ae0c-4828-afed-e6036aa367cf-multus-daemon-config") pod "multus-x6n7t" (UID: "29261de0-ae0c-4828-afed-e6036aa367cf") : failed to sync configmap cache: timed out waiting for the condition Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.121877 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.123486 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.133762 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.149227 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.165242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"dat
a-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.176756 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 
08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.189109 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.201963 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.209907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.209948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.209958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.209975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.209986 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.216154 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.227645 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.240741 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.254527 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.266181 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.311858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.311893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.311903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.311919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.311931 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.414475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.414517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.414526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.414541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.414550 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.517295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.517352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.517369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.517391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.517407 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.620268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.620537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.620549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.620564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.620575 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.632696 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.632785 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.632859 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:11:41.632839097 +0000 UTC m=+35.341869892 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.632894 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/29261de0-ae0c-4828-afed-e6036aa367cf-multus-daemon-config\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.632907 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.632921 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.632971 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:41.63294923 +0000 UTC m=+35.341980095 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.633017 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.633064 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:41.633055863 +0000 UTC m=+35.342086658 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.633560 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/29261de0-ae0c-4828-afed-e6036aa367cf-multus-daemon-config\") pod \"multus-x6n7t\" (UID: \"29261de0-ae0c-4828-afed-e6036aa367cf\") " pod="openshift-multus/multus-x6n7t" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.720360 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-x6n7t" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.722240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.722290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.722302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.722317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.722328 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.733584 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.733652 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.733764 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.733803 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.733803 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.733818 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:33 crc 
kubenswrapper[4760]: E1125 08:11:33.733827 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.733840 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.733877 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:41.733857174 +0000 UTC m=+35.442888019 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.733915 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:41.733905275 +0000 UTC m=+35.442936160 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.824276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.824310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.824320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.824334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.824344 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.926850 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.926890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.926902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.926916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.926929 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:33Z","lastTransitionTime":"2025-11-25T08:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.937742 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.937774 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:33 crc kubenswrapper[4760]: I1125 08:11:33.937813 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.937867 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.937963 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:33 crc kubenswrapper[4760]: E1125 08:11:33.938048 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.028901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.028948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.028957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.028972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.028983 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.096239 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x6n7t" event={"ID":"29261de0-ae0c-4828-afed-e6036aa367cf","Type":"ContainerStarted","Data":"c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.096309 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x6n7t" event={"ID":"29261de0-ae0c-4828-afed-e6036aa367cf","Type":"ContainerStarted","Data":"8f3250e07f661c53015df2f2b9fa1cea37d75d14006883127045c9879baa8a27"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.097574 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2" exitCode=0 Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.097618 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.099364 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.099391 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.101088 4760 generic.go:334] "Generic (PLEG): 
container finished" podID="f5366e35-adc6-45e2-966c-55fc7e6c8b79" containerID="e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d" exitCode=0 Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.101127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerDied","Data":"e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.110555 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19
888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.124340 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.130889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.130915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.130925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.130938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.130946 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.144217 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.160978 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.180092 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.195999 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.206927 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.219498 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.233589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.233639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.233651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.233669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.233680 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.235291 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.248037 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.261057 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.273544 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.284272 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.295763 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.307681 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.317455 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.329336 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.336968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.337003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.337015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.337032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.337045 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.341928 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.354443 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.368363 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.381018 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.412351 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.424172 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.437869 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.439599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.439619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.439628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.439643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.439653 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.456548 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.474989 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.495631 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.507678 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:34Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.541550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc 
kubenswrapper[4760]: I1125 08:11:34.541582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.541592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.541606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.541617 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.647751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.647812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.647823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.647840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.647854 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.749633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.749960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.750059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.750185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.750308 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.864630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.864671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.864682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.864698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.864709 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.967126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.967161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.967172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.967190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:34 crc kubenswrapper[4760]: I1125 08:11:34.967201 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:34Z","lastTransitionTime":"2025-11-25T08:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.069396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.069433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.069442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.069458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.069466 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.106579 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerStarted","Data":"b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.109711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.109785 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.109801 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.121386 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.134484 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.146524 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.157918 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.167632 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.171459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 
08:11:35.171505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.171513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.171528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.171538 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.180587 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.189678 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.206696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.210941 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nlwcx"] Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.211347 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.213539 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.213902 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.214102 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.214434 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.221076 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.235339 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.256444 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.273951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.273988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.273998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.274013 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.274024 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.303297 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.324610 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.334551 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.345408 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e075860
2df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.352876 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d43a69a9-eef6-4091-b9fd-9bc0a283df79-host\") pod \"node-ca-nlwcx\" (UID: \"d43a69a9-eef6-4091-b9fd-9bc0a283df79\") " pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.352914 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/d43a69a9-eef6-4091-b9fd-9bc0a283df79-serviceca\") pod \"node-ca-nlwcx\" (UID: \"d43a69a9-eef6-4091-b9fd-9bc0a283df79\") " pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.352937 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6ml2\" (UniqueName: \"kubernetes.io/projected/d43a69a9-eef6-4091-b9fd-9bc0a283df79-kube-api-access-k6ml2\") pod \"node-ca-nlwcx\" (UID: \"d43a69a9-eef6-4091-b9fd-9bc0a283df79\") " pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.356050 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.368964 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.376472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.376500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.376508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.376521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.376530 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.379668 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.391449 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.404777 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.418053 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.431200 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.447130 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.453372 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d43a69a9-eef6-4091-b9fd-9bc0a283df79-host\") pod \"node-ca-nlwcx\" (UID: \"d43a69a9-eef6-4091-b9fd-9bc0a283df79\") " pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.453415 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d43a69a9-eef6-4091-b9fd-9bc0a283df79-serviceca\") pod \"node-ca-nlwcx\" (UID: \"d43a69a9-eef6-4091-b9fd-9bc0a283df79\") " 
pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.453436 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6ml2\" (UniqueName: \"kubernetes.io/projected/d43a69a9-eef6-4091-b9fd-9bc0a283df79-kube-api-access-k6ml2\") pod \"node-ca-nlwcx\" (UID: \"d43a69a9-eef6-4091-b9fd-9bc0a283df79\") " pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.453500 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d43a69a9-eef6-4091-b9fd-9bc0a283df79-host\") pod \"node-ca-nlwcx\" (UID: \"d43a69a9-eef6-4091-b9fd-9bc0a283df79\") " pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.454378 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d43a69a9-eef6-4091-b9fd-9bc0a283df79-serviceca\") pod \"node-ca-nlwcx\" (UID: \"d43a69a9-eef6-4091-b9fd-9bc0a283df79\") " pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.468692 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.470726 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6ml2\" (UniqueName: \"kubernetes.io/projected/d43a69a9-eef6-4091-b9fd-9bc0a283df79-kube-api-access-k6ml2\") pod \"node-ca-nlwcx\" (UID: \"d43a69a9-eef6-4091-b9fd-9bc0a283df79\") " pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.478485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.478679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.478790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.478857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.478919 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.486913 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.499947 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.510915 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.528290 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.540699 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:35Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.552279 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-nlwcx" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.581915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.581950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.581959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.581972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.581981 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.684173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.684223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.684235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.684273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.684288 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.786538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.786568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.786578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.786592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.786602 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.889914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.889947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.889956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.889968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.889978 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.938340 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.938407 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.938466 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:35 crc kubenswrapper[4760]: E1125 08:11:35.938579 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:35 crc kubenswrapper[4760]: E1125 08:11:35.938668 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:35 crc kubenswrapper[4760]: E1125 08:11:35.938741 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.992152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.992209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.992220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.992241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:35 crc kubenswrapper[4760]: I1125 08:11:35.992272 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:35Z","lastTransitionTime":"2025-11-25T08:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.095285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.095329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.095339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.095354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.095363 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:36Z","lastTransitionTime":"2025-11-25T08:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.117393 4760 generic.go:334] "Generic (PLEG): container finished" podID="f5366e35-adc6-45e2-966c-55fc7e6c8b79" containerID="b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7" exitCode=0 Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.117457 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerDied","Data":"b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.118722 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nlwcx" event={"ID":"d43a69a9-eef6-4091-b9fd-9bc0a283df79","Type":"ContainerStarted","Data":"49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.118747 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nlwcx" event={"ID":"d43a69a9-eef6-4091-b9fd-9bc0a283df79","Type":"ContainerStarted","Data":"97b89d58b4dfe23d45355bb2b5fc408d5bc7a46f1e793c107915ee5a41af6608"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.123655 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.123690 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.123702 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.142698 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44
a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.164239 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.177136 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.192677 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.197604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.197651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.197663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.197682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.197697 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:36Z","lastTransitionTime":"2025-11-25T08:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.207713 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.227543 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.253002 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.265663 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.277267 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.288701 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.300359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.300405 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.300416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 
08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.300431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.300443 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:36Z","lastTransitionTime":"2025-11-25T08:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.304133 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85
aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.315689 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.327799 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.344415 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.358387 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.370578 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.384447 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.397068 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.402810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.402858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.402877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.402897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.402911 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:36Z","lastTransitionTime":"2025-11-25T08:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.409795 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.422656 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.442286 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.463939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.474900 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.503054 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.505040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.505081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.505092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.505107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.505118 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:36Z","lastTransitionTime":"2025-11-25T08:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.543698 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.582859 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.609142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.609188 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.609198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.609219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.609235 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:36Z","lastTransitionTime":"2025-11-25T08:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.626604 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.665425 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.710372 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.712229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.712311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.712331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.712354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.712372 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:36Z","lastTransitionTime":"2025-11-25T08:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.741791 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.815881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.815932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.815948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.815975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.815991 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:36Z","lastTransitionTime":"2025-11-25T08:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.919024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.919080 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.919093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.919112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.919125 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:36Z","lastTransitionTime":"2025-11-25T08:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.958843 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.971118 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:36 crc kubenswrapper[4760]: I1125 08:11:36.996524 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.012168 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.020848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.020909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.020921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.020938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.020949 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.028003 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.042023 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.056008 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.079663 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.110105 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.123718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.123816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.123831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.123851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.124163 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.128700 4760 generic.go:334] "Generic (PLEG): container finished" podID="f5366e35-adc6-45e2-966c-55fc7e6c8b79" containerID="8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162" exitCode=0 Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.128744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerDied","Data":"8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.145737 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.184649 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.221932 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.228106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.228145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.228156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.228175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.228189 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.262980 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc
98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.304371 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.330120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.330350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.330438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.330523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.330623 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.341912 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.392204 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.432415 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.433980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.434015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.434025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.434042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.434053 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.464927 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.505501 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.537044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.537092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.537104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.537120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.537131 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.544928 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.584075 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.626998 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.639654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.639688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.639698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.639712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.639722 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.664147 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z 
is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.704749 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.741423 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.741521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.741551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.741560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 
08:11:37.741574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.741597 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.782940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.822170 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e075860
2df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.843657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.843692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.843700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc 
kubenswrapper[4760]: I1125 08:11:37.843715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.843724 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.862714 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.901432 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.937557 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.937638 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:37 crc kubenswrapper[4760]: E1125 08:11:37.937673 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:37 crc kubenswrapper[4760]: E1125 08:11:37.937787 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.937563 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:37 crc kubenswrapper[4760]: E1125 08:11:37.937926 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.945767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.945801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.945810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.945828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.945839 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:37Z","lastTransitionTime":"2025-11-25T08:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:37 crc kubenswrapper[4760]: I1125 08:11:37.946911 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.049681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.049730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.049747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.049779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.049795 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.134516 4760 generic.go:334] "Generic (PLEG): container finished" podID="f5366e35-adc6-45e2-966c-55fc7e6c8b79" containerID="ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84" exitCode=0 Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.134570 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerDied","Data":"ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.138993 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.146638 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.151569 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.151597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.151609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.151625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.151636 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.158122 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.170618 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.181038 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.192319 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.204240 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.223542 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.255103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.255136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.255144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.255157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.255166 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.261288 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.304168 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.341958 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.357328 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.357372 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.357383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:38 crc 
kubenswrapper[4760]: I1125 08:11:38.357400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.357414 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.382868 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 
08:11:38.424945 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.460196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.460238 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.460264 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.460279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.460288 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.464959 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.514766 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.551658 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:38Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.562989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.563028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.563040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.563057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.563069 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.666030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.666106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.666119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.666140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.666154 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.768233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.768293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.768304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.768320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.768330 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.870553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.870587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.870595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.870609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.870618 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.972800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.973042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.973054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.973071 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:38 crc kubenswrapper[4760]: I1125 08:11:38.973084 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:38Z","lastTransitionTime":"2025-11-25T08:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.075653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.075688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.075696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.075710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.075719 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:39Z","lastTransitionTime":"2025-11-25T08:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.145586 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerStarted","Data":"62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.163605 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.177308 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.178425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.178461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.178471 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.178485 4760 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.178495 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:39Z","lastTransitionTime":"2025-11-25T08:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.195431 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.210581 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.221833 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.234093 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wb
n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.255717 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.274932 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f5840
8f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89
cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.281666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.281708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.281721 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.281743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.281947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:39Z","lastTransitionTime":"2025-11-25T08:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.301102 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.326998 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.349754 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.359351 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.369376 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.380998 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources
\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator
@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.384142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.384189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.384204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.384220 4760 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.384231 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:39Z","lastTransitionTime":"2025-11-25T08:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.391957 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T
08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:39Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.488008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.488082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.488104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.488133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.488155 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:39Z","lastTransitionTime":"2025-11-25T08:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.590673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.590722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.590736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.590754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.590767 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:39Z","lastTransitionTime":"2025-11-25T08:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.693157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.693216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.693233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.693278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.693295 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:39Z","lastTransitionTime":"2025-11-25T08:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.795327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.795374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.795383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.795400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.795410 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:39Z","lastTransitionTime":"2025-11-25T08:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.898359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.898747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.898767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.898784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.898796 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:39Z","lastTransitionTime":"2025-11-25T08:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.937797 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.937848 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:39 crc kubenswrapper[4760]: I1125 08:11:39.937925 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:39 crc kubenswrapper[4760]: E1125 08:11:39.938001 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:39 crc kubenswrapper[4760]: E1125 08:11:39.938072 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:39 crc kubenswrapper[4760]: E1125 08:11:39.938121 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.001515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.001558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.001568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.001583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.001593 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.103758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.103795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.103807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.103824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.103834 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.182276 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.195009 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.205765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.205802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.205810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.205824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.205834 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.208772 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.225965 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.238429 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.250784 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.266289 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wb
n9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.282512 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.303167 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f5840
8f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89
cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.307962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.308003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.308013 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.308029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.308045 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.317903 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 
maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.330087 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.341426 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.350732 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.360285 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.373620 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources
\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator
@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.384409 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:40Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.410267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.410307 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.410319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.410336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.410346 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.512514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.512622 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.512682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.512745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.512802 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.614910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.614966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.614981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.614998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.615010 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.717425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.717504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.717523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.717544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.717559 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.820108 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.820154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.820162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.820176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.820186 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.922356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.922398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.922410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.922425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:40 crc kubenswrapper[4760]: I1125 08:11:40.922435 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:40Z","lastTransitionTime":"2025-11-25T08:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.025425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.025487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.025504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.025529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.025546 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.128597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.128646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.128661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.128681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.128695 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.154508 4760 generic.go:334] "Generic (PLEG): container finished" podID="f5366e35-adc6-45e2-966c-55fc7e6c8b79" containerID="62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96" exitCode=0 Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.154572 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerDied","Data":"62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.159331 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.159577 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.173347 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.186958 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.198602 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 
08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.199611 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.213765 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.225558 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.231236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.231311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.231323 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.231339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.231356 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.237517 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.251772 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.262669 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.276767 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.291042 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.311697 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.331995 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.333544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.333594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.333605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.333620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.333631 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.350195 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.363538 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.374054 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.386050 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.398713 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.408880 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.419158 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.431973 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.435159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.435188 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.435201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.435219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.435230 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.449542 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.462764 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.475364 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.486910 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.498840 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.512058 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.534413 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.538491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.538526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.538534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.538547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.538556 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.546302 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z 
is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.556459 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.568684 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:41Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.641092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.641214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.641223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 
08:11:41.641237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.641289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.714221 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.714388 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:11:57.714364138 +0000 UTC m=+51.423394943 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.714580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.714701 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.714765 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:57.714752939 +0000 UTC m=+51.423783754 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.715142 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.715338 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.715399 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:57.715383887 +0000 UTC m=+51.424414692 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.743820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.743850 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.743861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.743878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.743890 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.816473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.816550 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.816697 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.816729 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.816745 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.816797 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:57.816779605 +0000 UTC m=+51.525810410 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.817188 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.817211 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.817221 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.817272 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:57.817241718 +0000 UTC m=+51.526272523 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.845939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.845974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.845984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.845999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.846009 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.938039 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.938187 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.938607 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.938672 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.938721 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:41 crc kubenswrapper[4760]: E1125 08:11:41.938773 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.949825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.949860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.949870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.949884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:41 crc kubenswrapper[4760]: I1125 08:11:41.949895 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:41Z","lastTransitionTime":"2025-11-25T08:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.052001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.052030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.052041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.052056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.052067 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.154038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.154066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.154074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.154089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.154100 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.165016 4760 generic.go:334] "Generic (PLEG): container finished" podID="f5366e35-adc6-45e2-966c-55fc7e6c8b79" containerID="ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f" exitCode=0 Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.165114 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.165808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerDied","Data":"ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.165841 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.180337 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.194631 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.200376 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.206618 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 
08:11:42.217421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.217490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.217508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.217533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.217544 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.224095 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: E1125 08:11:42.233572 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.234822 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb2099
54c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.237079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.237107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.237119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.237135 
4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.237146 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: E1125 08:11:42.248956 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.252222 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.253014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.253065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.253083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.253105 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.253117 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: E1125 08:11:42.264478 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.268056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.268102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.268113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.268133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.268144 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.272471 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: E1125 08:11:42.279880 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.283408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.283443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.283452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.283467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.283478 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.284691 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0
07293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: E1125 08:11:42.296296 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: E1125 08:11:42.296440 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.298391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.298433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.298446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.298462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.298474 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.299042 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.313279 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.326373 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.340838 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.355784 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.370625 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.384120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.395257 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.400874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.400921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.400931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.400948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: 
I1125 08:11:42.400966 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.409522 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.423158 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.437223 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.447062 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.457940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.469381 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.478948 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.492761 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 
08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.502812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.502855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.502864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.502880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.502889 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.505603 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.519481 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.530956 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.546687 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.568228 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.580636 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.591451 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"na
me\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:42Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.604645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.604685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.604695 4760 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.604710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.604720 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.706691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.707059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.707212 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.707376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.707493 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.810202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.810285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.810304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.810329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.810347 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.912642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.912687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.912699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.912717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:42 crc kubenswrapper[4760]: I1125 08:11:42.912729 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:42Z","lastTransitionTime":"2025-11-25T08:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.014792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.014865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.014888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.014916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.014937 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.117433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.117503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.117525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.117557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.117577 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.171783 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" event={"ID":"f5366e35-adc6-45e2-966c-55fc7e6c8b79","Type":"ContainerStarted","Data":"325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.188321 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.206105 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.219926 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.220019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.220058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.220074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.220094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.220108 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.232471 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc
98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.246440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.261874 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.277180 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.286926 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.299931 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.316594 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.322083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.322107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.322116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.322129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.322138 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.336694 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.361331 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.374211 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.388398 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.401040 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.424987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.425026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.425034 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.425048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.425058 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.527601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.527652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.527660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.527687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.527697 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.630218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.630312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.630330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.630351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.630368 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.709274 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r"] Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.709971 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.711552 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.712147 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.720633 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445
c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.732180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.732217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.732228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.732261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.732271 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.737759 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpp4s\" (UniqueName: \"kubernetes.io/projected/38a058ae-552b-4862-a55a-2cd1c775e77a-kube-api-access-bpp4s\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.738444 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38a058ae-552b-4862-a55a-2cd1c775e77a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.738489 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38a058ae-552b-4862-a55a-2cd1c775e77a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.738524 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/38a058ae-552b-4862-a55a-2cd1c775e77a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.743632 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.762357 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67
314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.776453 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.787084 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.799828 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.810201 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.823374 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.833972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.834006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.834015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.834035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.834045 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.839538 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38a058ae-552b-4862-a55a-2cd1c775e77a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.839608 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38a058ae-552b-4862-a55a-2cd1c775e77a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.839638 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpp4s\" (UniqueName: \"kubernetes.io/projected/38a058ae-552b-4862-a55a-2cd1c775e77a-kube-api-access-bpp4s\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.839697 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38a058ae-552b-4862-a55a-2cd1c775e77a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.840183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/38a058ae-552b-4862-a55a-2cd1c775e77a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.840347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38a058ae-552b-4862-a55a-2cd1c775e77a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.842567 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.844409 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38a058ae-552b-4862-a55a-2cd1c775e77a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.855007 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.855100 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpp4s\" (UniqueName: 
\"kubernetes.io/projected/38a058ae-552b-4862-a55a-2cd1c775e77a-kube-api-access-bpp4s\") pod \"ovnkube-control-plane-749d76644c-c8n4r\" (UID: \"38a058ae-552b-4862-a55a-2cd1c775e77a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.864772 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.877137 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.886536 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.897964 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.909953 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.924199 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.935941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.935994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.936010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.936035 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.936055 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:43Z","lastTransitionTime":"2025-11-25T08:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.937498 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.937534 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:43 crc kubenswrapper[4760]: E1125 08:11:43.937588 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:43 crc kubenswrapper[4760]: I1125 08:11:43.937504 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:43 crc kubenswrapper[4760]: E1125 08:11:43.937666 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:43 crc kubenswrapper[4760]: E1125 08:11:43.937733 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.028240 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.048269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.048313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.048325 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.048342 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.048353 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:44 crc kubenswrapper[4760]: W1125 08:11:44.049579 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a058ae_552b_4862_a55a_2cd1c775e77a.slice/crio-03631a907d36abbee54a27a562c9712be5fd060547b6e1ddeef99eec6aa581a3 WatchSource:0}: Error finding container 03631a907d36abbee54a27a562c9712be5fd060547b6e1ddeef99eec6aa581a3: Status 404 returned error can't find the container with id 03631a907d36abbee54a27a562c9712be5fd060547b6e1ddeef99eec6aa581a3 Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.150628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.150657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.150665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.150678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.150687 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.174669 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" event={"ID":"38a058ae-552b-4862-a55a-2cd1c775e77a","Type":"ContainerStarted","Data":"03631a907d36abbee54a27a562c9712be5fd060547b6e1ddeef99eec6aa581a3"} Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.252781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.252825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.252836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.252852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.252863 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.355278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.355326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.355340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.355357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.355367 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.459812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.459864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.459875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.459893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.459902 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.561525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.561863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.561873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.561888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.561901 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.664348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.664396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.664411 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.664427 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.664439 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.767357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.767397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.767406 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.767419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.767427 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.870101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.870146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.870157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.870176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.870187 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.972163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.972195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.972211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.972227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:44 crc kubenswrapper[4760]: I1125 08:11:44.972238 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:44Z","lastTransitionTime":"2025-11-25T08:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.074843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.074905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.074942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.074966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.074983 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.177339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.177388 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.177404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.177425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.177442 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.181329 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" event={"ID":"38a058ae-552b-4862-a55a-2cd1c775e77a","Type":"ContainerStarted","Data":"d7ce3d508fd943c5e62dd5a6533191d5eae6c685171e1efc8fcab29f5ac6203b"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.181368 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" event={"ID":"38a058ae-552b-4862-a55a-2cd1c775e77a","Type":"ContainerStarted","Data":"ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.183297 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/0.log" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.186757 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc" exitCode=1 Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.186788 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.187412 4760 scope.go:117] "RemoveContainer" containerID="8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.200422 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.215355 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.244240 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],
\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.265378 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.279595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.279627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.279637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.279650 4760 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.279660 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.289565 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.302406 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.317803 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.330688 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.344531 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.357526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.371716 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.382195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc 
kubenswrapper[4760]: I1125 08:11:45.382229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.382237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.382272 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.382283 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.385508 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.396675 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.405603 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.414513 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.425927 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.449573 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.461922 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.472904 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.484648 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.485166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.485200 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.485212 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.485230 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.485242 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.496200 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.510467 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.529934 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:44Z\\\",\\\"message\\\":\\\"4.703469 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:11:44.703512 6039 reflector.go:311] 
Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:11:44.703637 6039 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 08:11:44.703679 6039 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 08:11:44.703750 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 08:11:44.703769 6039 factory.go:656] Stopping watch factory\\\\nI1125 08:11:44.703789 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 08:11:44.703690 6039 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:11:44.704004 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 08:11:44.704032 6039 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.543970 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.559189 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.572948 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.581431 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-v2qd9"] Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.581865 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:45 crc kubenswrapper[4760]: E1125 08:11:45.581924 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.586860 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.587619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.587664 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.587694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.587710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.587722 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.597901 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.608021 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.623378 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.636624 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.649408 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.656236 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.656370 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvxr5\" (UniqueName: \"kubernetes.io/projected/deaf3f00-2bbd-4217-9414-5a6759e72b60-kube-api-access-hvxr5\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.662082 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.672950 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc 
kubenswrapper[4760]: I1125 08:11:45.684050 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.689845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.689900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.689912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.689926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.689936 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.698631 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.712526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.726198 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.737531 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.749167 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.757124 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.757164 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvxr5\" (UniqueName: \"kubernetes.io/projected/deaf3f00-2bbd-4217-9414-5a6759e72b60-kube-api-access-hvxr5\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:45 crc kubenswrapper[4760]: E1125 08:11:45.757317 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:45 crc kubenswrapper[4760]: E1125 08:11:45.757406 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs podName:deaf3f00-2bbd-4217-9414-5a6759e72b60 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:46.257383342 +0000 UTC m=+39.966414247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs") pod "network-metrics-daemon-v2qd9" (UID: "deaf3f00-2bbd-4217-9414-5a6759e72b60") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.766201 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d
95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.788619 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.789037 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvxr5\" (UniqueName: \"kubernetes.io/projected/deaf3f00-2bbd-4217-9414-5a6759e72b60-kube-api-access-hvxr5\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.792186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.792214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.792224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.792238 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.792271 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.804776 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\
\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276
e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 
08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3
5825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.815528 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.826114 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.837041 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.852811 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.871231 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:44Z\\\",\\\"message\\\":\\\"4.703469 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:11:44.703512 6039 reflector.go:311] 
Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:11:44.703637 6039 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 08:11:44.703679 6039 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 08:11:44.703750 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 08:11:44.703769 6039 factory.go:656] Stopping watch factory\\\\nI1125 08:11:44.703789 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 08:11:44.703690 6039 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:11:44.704004 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 08:11:44.704032 6039 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.889373 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:45Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.895044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.895082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.895092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.895106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.895116 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.937520 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.937629 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:45 crc kubenswrapper[4760]: E1125 08:11:45.937688 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.937718 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:45 crc kubenswrapper[4760]: E1125 08:11:45.937865 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:45 crc kubenswrapper[4760]: E1125 08:11:45.938121 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.998146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.998187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.998195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.998210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:45 crc kubenswrapper[4760]: I1125 08:11:45.998220 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:45Z","lastTransitionTime":"2025-11-25T08:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.099989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.100047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.100059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.100072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.100100 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:46Z","lastTransitionTime":"2025-11-25T08:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.191267 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/1.log" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.191799 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/0.log" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.195662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.196600 4760 scope.go:117] "RemoveContainer" containerID="81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b" Nov 25 08:11:46 crc kubenswrapper[4760]: E1125 08:11:46.196753 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.201415 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.201441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.201453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:46 crc 
kubenswrapper[4760]: I1125 08:11:46.201468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.201480 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:46Z","lastTransitionTime":"2025-11-25T08:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.209815 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.223593 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.232211 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.241943 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc 
kubenswrapper[4760]: I1125 08:11:46.252907 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.261588 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:46 crc kubenswrapper[4760]: E1125 08:11:46.261726 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:46 crc kubenswrapper[4760]: E1125 08:11:46.261792 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs podName:deaf3f00-2bbd-4217-9414-5a6759e72b60 nodeName:}" failed. 
No retries permitted until 2025-11-25 08:11:47.261777077 +0000 UTC m=+40.970807862 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs") pod "network-metrics-daemon-v2qd9" (UID: "deaf3f00-2bbd-4217-9414-5a6759e72b60") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.268169 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\
\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.278637 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.289678 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.301672 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.303192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.303330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.303455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.303558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.303643 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:46Z","lastTransitionTime":"2025-11-25T08:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.312400 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.330079 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.341461 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.352443 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.367290 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.379028 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.394638 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.406055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.406090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.406101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.406116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.406127 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:46Z","lastTransitionTime":"2025-11-25T08:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.411023 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:44Z\\\",\\\"message\\\":\\\"4.703469 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:11:44.703512 6039 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:11:44.703637 6039 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 08:11:44.703679 6039 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 08:11:44.703750 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 08:11:44.703769 6039 factory.go:656] Stopping watch factory\\\\nI1125 08:11:44.703789 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 08:11:44.703690 6039 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:11:44.704004 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 08:11:44.704032 6039 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:46Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 08:11:46.168392 6282 services_controller.go:452] Built service openshift-marketplace/redhat-operators per-node LB for network=default: []services.LB{}\\\\nI1125 08:11:46.168401 6282 services_controller.go:453] Built service openshift-marketplace/redhat-operators template LB for network=default: []services.LB{}\\\\nI1125 08:11:46.168407 6282 services_controller.go:454] Service openshift-marketplace/redhat-operators for network=default has 
1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1125 08:11:46.168174 6282 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-nlwcx in node crc\\\\nF1125 08:11:46.168432 6282 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net
.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.508401 4760 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.508439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.508469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.508484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.508517 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:46Z","lastTransitionTime":"2025-11-25T08:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.610876 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.611546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.611574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.611595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.611607 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:46Z","lastTransitionTime":"2025-11-25T08:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.714478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.714526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.714536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.714550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.714559 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:46Z","lastTransitionTime":"2025-11-25T08:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.816330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.816363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.816371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.816386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.816397 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:46Z","lastTransitionTime":"2025-11-25T08:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.919427 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.919476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.919489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.919509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.919521 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:46Z","lastTransitionTime":"2025-11-25T08:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.937807 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:46 crc kubenswrapper[4760]: E1125 08:11:46.937965 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.951698 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.964369 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.977020 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:46 crc kubenswrapper[4760]: I1125 08:11:46.990231 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.005199 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.021446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.021509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.021522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.021539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.021551 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.026910 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:44Z\\\",\\\"message\\\":\\\"4.703469 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:11:44.703512 6039 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:11:44.703637 6039 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 08:11:44.703679 6039 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 08:11:44.703750 6039 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 08:11:44.703769 6039 factory.go:656] Stopping watch factory\\\\nI1125 08:11:44.703789 6039 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 08:11:44.703690 6039 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:11:44.704004 6039 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 08:11:44.704032 6039 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:46Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 08:11:46.168392 6282 services_controller.go:452] Built service openshift-marketplace/redhat-operators per-node LB for network=default: []services.LB{}\\\\nI1125 08:11:46.168401 6282 services_controller.go:453] Built service openshift-marketplace/redhat-operators template LB for network=default: []services.LB{}\\\\nI1125 08:11:46.168407 6282 services_controller.go:454] Service openshift-marketplace/redhat-operators for network=default has 
1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1125 08:11:46.168174 6282 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-nlwcx in node crc\\\\nF1125 08:11:46.168432 6282 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net
.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.048237 4760 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed2870
0dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd
757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.059286 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.069448 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc 
kubenswrapper[4760]: I1125 08:11:47.080635 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.092554 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.107025 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.117178 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.124037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: 
I1125 08:11:47.124079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.124088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.124102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.124112 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.128359 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.143271 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.155572 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.165408 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.199400 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/1.log" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.199989 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/0.log" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.202200 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b" exitCode=1 Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.202277 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.202333 4760 scope.go:117] "RemoveContainer" containerID="8298324fddc626dadd8b1c467b4e4bce254a6a78351a32a0e0e09c592718c2cc" Nov 25 
08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.203094 4760 scope.go:117] "RemoveContainer" containerID="81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b" Nov 25 08:11:47 crc kubenswrapper[4760]: E1125 08:11:47.203288 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.219781 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.226327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.226360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.226368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc 
kubenswrapper[4760]: I1125 08:11:47.226381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.226390 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.231438 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 
08:11:47.243350 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.257397 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.271592 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:47 crc kubenswrapper[4760]: E1125 08:11:47.271926 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:47 crc kubenswrapper[4760]: E1125 08:11:47.272020 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs podName:deaf3f00-2bbd-4217-9414-5a6759e72b60 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:49.271998108 +0000 UTC m=+42.981028963 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs") pod "network-metrics-daemon-v2qd9" (UID: "deaf3f00-2bbd-4217-9414-5a6759e72b60") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.275816 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:46Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 08:11:46.168392 6282 services_controller.go:452] Built service openshift-marketplace/redhat-operators per-node LB for network=default: []services.LB{}\\\\nI1125 08:11:46.168401 6282 services_controller.go:453] Built service openshift-marketplace/redhat-operators template LB for 
network=default: []services.LB{}\\\\nI1125 08:11:46.168407 6282 services_controller.go:454] Service openshift-marketplace/redhat-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1125 08:11:46.168174 6282 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-nlwcx in node crc\\\\nF1125 08:11:46.168432 6282 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.293412 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.306377 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.315749 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc 
kubenswrapper[4760]: I1125 08:11:47.325641 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.328226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.328279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.328293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.328308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.328317 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.337440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.345541 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.357291 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e075860
2df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.367136 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.383154 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.396617 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.424144 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.430955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.430999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.431011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.431036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.431047 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.461222 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.533349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.533615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.533723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.533810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.533905 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.637355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.637643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.637719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.637784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.637844 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.740789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.740844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.740858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.740877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.740889 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.843641 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.843676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.843685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.843698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.843709 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.937939 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.938045 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:47 crc kubenswrapper[4760]: E1125 08:11:47.938098 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.937941 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:47 crc kubenswrapper[4760]: E1125 08:11:47.938200 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:47 crc kubenswrapper[4760]: E1125 08:11:47.938244 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.945901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.945938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.945950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.945965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:47 crc kubenswrapper[4760]: I1125 08:11:47.945977 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:47Z","lastTransitionTime":"2025-11-25T08:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.047742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.047785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.047797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.047812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.047823 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.150106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.150137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.150148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.150162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.150171 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.207056 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/1.log" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.252356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.252397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.252421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.252440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.252455 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.356277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.356359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.356376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.356396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.356410 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.459682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.459758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.459804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.459836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.459857 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.562386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.562435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.562450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.562468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.562479 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.665182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.665244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.665289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.665310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.665325 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.768576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.768644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.768661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.768688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.768708 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.872061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.872311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.872322 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.872341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.872352 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.938106 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:48 crc kubenswrapper[4760]: E1125 08:11:48.938399 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.974725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.974760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.974771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.974789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:48 crc kubenswrapper[4760]: I1125 08:11:48.974801 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:48Z","lastTransitionTime":"2025-11-25T08:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.077550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.078134 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.078174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.078199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.078214 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:49Z","lastTransitionTime":"2025-11-25T08:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.180928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.180961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.180969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.180983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.180992 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:49Z","lastTransitionTime":"2025-11-25T08:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.283559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.283657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.283690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.283724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.283746 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:49Z","lastTransitionTime":"2025-11-25T08:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.294171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:49 crc kubenswrapper[4760]: E1125 08:11:49.294370 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:49 crc kubenswrapper[4760]: E1125 08:11:49.294471 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs podName:deaf3f00-2bbd-4217-9414-5a6759e72b60 nodeName:}" failed. No retries permitted until 2025-11-25 08:11:53.294443637 +0000 UTC m=+47.003474462 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs") pod "network-metrics-daemon-v2qd9" (UID: "deaf3f00-2bbd-4217-9414-5a6759e72b60") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.385937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.386009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.386025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.386045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.386059 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:49Z","lastTransitionTime":"2025-11-25T08:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.488576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.488616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.488627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.488642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.488652 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:49Z","lastTransitionTime":"2025-11-25T08:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.591222 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.591302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.591330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.591356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.591372 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:49Z","lastTransitionTime":"2025-11-25T08:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.696211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.696276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.696290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.696309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.696327 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:49Z","lastTransitionTime":"2025-11-25T08:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.799052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.799086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.799099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.799118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.799132 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:49Z","lastTransitionTime":"2025-11-25T08:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.900955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.900992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.901003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.901018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.901030 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:49Z","lastTransitionTime":"2025-11-25T08:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.937886 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.937886 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:49 crc kubenswrapper[4760]: I1125 08:11:49.937918 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:49 crc kubenswrapper[4760]: E1125 08:11:49.938012 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:49 crc kubenswrapper[4760]: E1125 08:11:49.938159 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:49 crc kubenswrapper[4760]: E1125 08:11:49.938362 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.003453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.003698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.003782 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.003885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.003967 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.106344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.106559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.106631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.106709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.106803 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.209445 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.209488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.209500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.209515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.209556 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.312456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.312513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.312523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.312538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.312547 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.415288 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.415330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.415341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.415357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.415369 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.518404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.518453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.518465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.518483 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.518525 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.621161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.621341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.621369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.621400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.621425 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.723863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.723954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.723972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.723998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.724010 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.826356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.826393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.826404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.826419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.826431 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.928527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.928845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.928966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.929067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.929178 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:50Z","lastTransitionTime":"2025-11-25T08:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:50 crc kubenswrapper[4760]: I1125 08:11:50.937953 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:50 crc kubenswrapper[4760]: E1125 08:11:50.938383 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.032075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.032663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.032684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.032708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.032719 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.135470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.135538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.135550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.135575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.135591 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.238407 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.238462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.238480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.238521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.238539 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.341137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.341162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.341170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.341182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.341192 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.443822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.444095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.444175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.444273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.444348 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.547033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.547072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.547084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.547099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.547112 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.649856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.650193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.650438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.650663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.650890 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.753638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.753911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.754028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.754199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.754353 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.857137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.857196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.857212 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.857231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.857263 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.937624 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.937705 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.937633 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:51 crc kubenswrapper[4760]: E1125 08:11:51.937754 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:51 crc kubenswrapper[4760]: E1125 08:11:51.937860 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:51 crc kubenswrapper[4760]: E1125 08:11:51.937969 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.959647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.959686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.959695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.959709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:51 crc kubenswrapper[4760]: I1125 08:11:51.959719 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:51Z","lastTransitionTime":"2025-11-25T08:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.061901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.061943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.061955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.061970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.061980 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.164551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.164599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.164615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.164635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.164651 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.266742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.266776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.266784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.266798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.266807 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.369403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.369440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.369451 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.369467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.369476 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.448550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.448592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.448602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.448619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.448630 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: E1125 08:11:52.466450 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:52Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.471283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.471336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.471348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.471370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.471385 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: E1125 08:11:52.516934 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:52Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.521830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.521919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.521937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.521963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.522003 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: E1125 08:11:52.536374 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:52Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:52 crc kubenswrapper[4760]: E1125 08:11:52.536543 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.538925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.538971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.538979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.538995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.539007 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.641812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.641847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.641855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.641868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.641877 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.743674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.743744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.743769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.743791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.743805 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.846504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.846572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.846584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.846598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.846609 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.938041 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:52 crc kubenswrapper[4760]: E1125 08:11:52.938209 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.948488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.948540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.948555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.948571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:52 crc kubenswrapper[4760]: I1125 08:11:52.948582 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:52Z","lastTransitionTime":"2025-11-25T08:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.051217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.051271 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.051282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.051298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.051308 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.153923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.153964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.153974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.153990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.154002 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.257188 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.257276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.257299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.257321 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.257336 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.333236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:53 crc kubenswrapper[4760]: E1125 08:11:53.333484 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:53 crc kubenswrapper[4760]: E1125 08:11:53.333617 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs podName:deaf3f00-2bbd-4217-9414-5a6759e72b60 nodeName:}" failed. No retries permitted until 2025-11-25 08:12:01.333585192 +0000 UTC m=+55.042615987 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs") pod "network-metrics-daemon-v2qd9" (UID: "deaf3f00-2bbd-4217-9414-5a6759e72b60") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.360017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.360062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.360071 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.360087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.360096 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.463062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.463112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.463129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.463149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.463163 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.565506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.565550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.565561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.565576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.565588 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.668030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.668070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.668081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.668096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.668106 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.770976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.771053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.771068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.771097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.771120 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.873437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.873485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.873494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.873509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.873518 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.938233 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.938346 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.938357 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:53 crc kubenswrapper[4760]: E1125 08:11:53.938468 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:53 crc kubenswrapper[4760]: E1125 08:11:53.938632 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:53 crc kubenswrapper[4760]: E1125 08:11:53.938761 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.976148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.976190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.976199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.976213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:53 crc kubenswrapper[4760]: I1125 08:11:53.976223 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:53Z","lastTransitionTime":"2025-11-25T08:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.078808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.078851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.078861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.078876 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.078886 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:54Z","lastTransitionTime":"2025-11-25T08:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.182925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.183014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.183029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.183057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.183073 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:54Z","lastTransitionTime":"2025-11-25T08:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.285397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.285463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.285475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.285520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.285533 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:54Z","lastTransitionTime":"2025-11-25T08:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.388335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.388400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.388413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.388431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.388445 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:54Z","lastTransitionTime":"2025-11-25T08:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.490677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.490720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.490731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.490750 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.490760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:54Z","lastTransitionTime":"2025-11-25T08:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.593079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.593130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.593146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.593164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.593177 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:54Z","lastTransitionTime":"2025-11-25T08:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.696125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.696166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.696177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.696190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.696200 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:54Z","lastTransitionTime":"2025-11-25T08:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.798063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.798349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.798363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.798377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.798389 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:54Z","lastTransitionTime":"2025-11-25T08:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.850177 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.870382 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.879022 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.900865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.900906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.900917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.900934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.900948 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:54Z","lastTransitionTime":"2025-11-25T08:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.903311 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.915113 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.926554 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.937656 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:54 crc kubenswrapper[4760]: E1125 08:11:54.937802 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.939516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.949966 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.960675 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.971617 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.985136 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:54 crc kubenswrapper[4760]: I1125 08:11:54.996907 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.002592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.002621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.002632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.002647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.002658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.008025 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:55Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.020469 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:55Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.036965 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:46Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 08:11:46.168392 6282 services_controller.go:452] Built service openshift-marketplace/redhat-operators per-node LB for network=default: []services.LB{}\\\\nI1125 08:11:46.168401 6282 services_controller.go:453] Built service openshift-marketplace/redhat-operators template LB for 
network=default: []services.LB{}\\\\nI1125 08:11:46.168407 6282 services_controller.go:454] Service openshift-marketplace/redhat-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1125 08:11:46.168174 6282 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-nlwcx in node crc\\\\nF1125 08:11:46.168432 6282 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:55Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.053141 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:55Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.065001 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:55Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.074582 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:55Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.085495 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:55Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.105098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.105140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.105157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.105173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.105183 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.207179 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.207215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.207226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.207240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.207271 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.309560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.309607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.309620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.309638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.309650 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.411937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.412330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.412472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.412633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.412752 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.515101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.515163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.515171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.515184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.515193 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.617280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.617316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.617324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.617337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.617345 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.719535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.719775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.719788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.719809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.719821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.821892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.821956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.821967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.821982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.821994 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.924430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.924490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.924501 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.924518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.924530 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:55Z","lastTransitionTime":"2025-11-25T08:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.937806 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.937806 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:55 crc kubenswrapper[4760]: I1125 08:11:55.937898 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:55 crc kubenswrapper[4760]: E1125 08:11:55.938052 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:55 crc kubenswrapper[4760]: E1125 08:11:55.938151 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:55 crc kubenswrapper[4760]: E1125 08:11:55.938308 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.026355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.026407 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.026420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.026438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.026452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.128306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.128350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.128360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.128373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.128382 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.231314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.231353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.231581 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.231612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.231625 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.333775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.333812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.333820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.333834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.333842 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.436376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.436413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.436421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.436434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.436443 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.539936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.539981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.539992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.540009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.540020 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.642467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.642527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.642544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.642564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.642582 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.744838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.744892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.744907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.744928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.744941 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.847372 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.847433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.847457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.847489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.847514 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.937555 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:56 crc kubenswrapper[4760]: E1125 08:11:56.937672 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.949331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.949571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.949713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.949809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.949888 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:56Z","lastTransitionTime":"2025-11-25T08:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.951104 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:56Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.961644 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:56Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.970766 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:56Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:56 crc kubenswrapper[4760]: I1125 08:11:56.979344 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:56Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:56 crc 
kubenswrapper[4760]: I1125 08:11:56.992691 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:56Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.005363 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.022611 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.033872 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.045294 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.051532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.051574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.051585 4760 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.051601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.051612 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.056853 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.066901 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.086740 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.100026 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.111099 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.123929 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.134600 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.150237 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.153336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.153382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.153394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.153408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.153417 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.168648 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:46Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 08:11:46.168392 6282 services_controller.go:452] Built service openshift-marketplace/redhat-operators per-node LB for network=default: []services.LB{}\\\\nI1125 08:11:46.168401 6282 services_controller.go:453] Built service openshift-marketplace/redhat-operators template LB for 
network=default: []services.LB{}\\\\nI1125 08:11:46.168407 6282 services_controller.go:454] Service openshift-marketplace/redhat-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1125 08:11:46.168174 6282 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-nlwcx in node crc\\\\nF1125 08:11:46.168432 6282 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:11:57Z is after 2025-08-24T17:21:41Z" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.255902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.255944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.255954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.255970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.255981 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.358844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.358875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.358885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.358901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.358913 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.461277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.461319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.461329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.461344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.461376 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.563437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.563475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.563483 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.563497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.563506 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.666583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.666681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.666907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.667141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.667178 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.769580 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.769657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.769675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.769698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.769716 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.775982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.776120 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 08:12:29.776092226 +0000 UTC m=+83.485123031 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.776176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.776313 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.776316 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.776384 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.776437 4760 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:12:29.776426955 +0000 UTC m=+83.485457770 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.776579 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:12:29.776545488 +0000 UTC m=+83.485576333 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.872209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.872237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.872276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.872294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.872303 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.877070 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.877125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.877220 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.877241 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.877283 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.877324 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 08:12:29.877310608 +0000 UTC m=+83.586341403 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.877673 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.877794 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.877866 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.877984 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 08:12:29.877965917 +0000 UTC m=+83.586996712 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.937851 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.937960 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.937869 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.937851 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.938019 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:57 crc kubenswrapper[4760]: E1125 08:11:57.938140 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.974926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.975145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.975219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.975322 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:57 crc kubenswrapper[4760]: I1125 08:11:57.975425 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:57Z","lastTransitionTime":"2025-11-25T08:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.078113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.078168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.078179 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.078198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.078207 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:58Z","lastTransitionTime":"2025-11-25T08:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.180315 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.180392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.180404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.180417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.180426 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:58Z","lastTransitionTime":"2025-11-25T08:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.282773 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.282948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.282969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.282991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.283006 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:58Z","lastTransitionTime":"2025-11-25T08:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.385496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.385536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.385546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.385561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.385572 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:58Z","lastTransitionTime":"2025-11-25T08:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.488170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.488635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.488845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.489125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.489356 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:58Z","lastTransitionTime":"2025-11-25T08:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.591613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.592088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.592151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.592265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.592431 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:58Z","lastTransitionTime":"2025-11-25T08:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.695052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.695105 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.695119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.695137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.695149 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:58Z","lastTransitionTime":"2025-11-25T08:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.797320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.797591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.797748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.797870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.797982 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:58Z","lastTransitionTime":"2025-11-25T08:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.901209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.901661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.901873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.902057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.902281 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:58Z","lastTransitionTime":"2025-11-25T08:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:58 crc kubenswrapper[4760]: I1125 08:11:58.937990 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:11:58 crc kubenswrapper[4760]: E1125 08:11:58.938159 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.005027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.005371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.005489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.005614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.005737 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.108488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.108530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.108544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.108574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.108590 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.211317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.211353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.211363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.211380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.211391 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.313879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.313915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.313924 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.313937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.313947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.416772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.416867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.416883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.416901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.416912 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.519836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.520377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.520532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.520624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.520721 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.623920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.623968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.623986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.624005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.624019 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.726475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.726540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.726553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.726576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.726592 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.829625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.829669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.829678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.829696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.829707 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.932072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.932123 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.932135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.932154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.932170 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:11:59Z","lastTransitionTime":"2025-11-25T08:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.937658 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.937687 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:11:59 crc kubenswrapper[4760]: I1125 08:11:59.937654 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:11:59 crc kubenswrapper[4760]: E1125 08:11:59.937797 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:11:59 crc kubenswrapper[4760]: E1125 08:11:59.937974 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:11:59 crc kubenswrapper[4760]: E1125 08:11:59.938096 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.035624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.035690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.035700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.035717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.035732 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.138549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.138603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.138621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.138643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.138659 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.241573 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.241613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.241626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.241644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.241656 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.343850 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.343925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.343948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.343979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.344000 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.446349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.446382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.446392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.446405 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.446413 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.549621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.549677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.549688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.549704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.549717 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.652616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.652671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.652687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.652707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.652719 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.755019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.755065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.755076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.755093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.755105 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.857786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.857829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.857840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.857856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.857867 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.937856 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:00 crc kubenswrapper[4760]: E1125 08:12:00.938041 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.960147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.960195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.960208 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.960225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:00 crc kubenswrapper[4760]: I1125 08:12:00.960237 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:00Z","lastTransitionTime":"2025-11-25T08:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.062756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.062797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.062814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.062832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.062841 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.165047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.165086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.165098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.165112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.165125 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.267328 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.267385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.267396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.267410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.267419 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.369767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.369807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.369819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.369837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.369847 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.415535 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:01 crc kubenswrapper[4760]: E1125 08:12:01.415705 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:12:01 crc kubenswrapper[4760]: E1125 08:12:01.415760 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs podName:deaf3f00-2bbd-4217-9414-5a6759e72b60 nodeName:}" failed. No retries permitted until 2025-11-25 08:12:17.415744842 +0000 UTC m=+71.124775627 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs") pod "network-metrics-daemon-v2qd9" (UID: "deaf3f00-2bbd-4217-9414-5a6759e72b60") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.471878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.471917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.471927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.471943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.471956 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.574695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.574743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.574755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.574772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.574784 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.678340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.678414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.678450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.678478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.678497 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.781350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.781406 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.781423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.781444 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.781458 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.886054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.886141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.886162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.886229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.886315 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.937606 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.937714 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.937611 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:01 crc kubenswrapper[4760]: E1125 08:12:01.937748 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:01 crc kubenswrapper[4760]: E1125 08:12:01.937861 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:01 crc kubenswrapper[4760]: E1125 08:12:01.938059 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.988798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.988846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.988857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.988873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:01 crc kubenswrapper[4760]: I1125 08:12:01.988884 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:01Z","lastTransitionTime":"2025-11-25T08:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.091693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.091751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.091765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.091787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.091805 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.194420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.194491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.194506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.194524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.194536 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.296781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.296814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.296822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.296836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.296844 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.399553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.399599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.399610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.399626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.399638 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.502770 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.502830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.502846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.502871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.502884 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.604888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.604955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.604976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.605005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.605028 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.688185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.688226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.688242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.688275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.688288 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: E1125 08:12:02.699072 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:02Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.702371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.702397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.702405 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.702417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.702428 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: E1125 08:12:02.713530 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:02Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.716947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.716988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.716998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.717011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.717022 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: E1125 08:12:02.726880 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:02Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.729962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.730009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.730020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.730038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.730056 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: E1125 08:12:02.742611 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:02Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.745330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.745389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.745400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.745418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.745432 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: E1125 08:12:02.756880 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:02Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:02 crc kubenswrapper[4760]: E1125 08:12:02.757058 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.758601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.758648 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.758664 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.758678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.758692 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.861204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.861278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.861293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.861308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.861321 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.938414 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:02 crc kubenswrapper[4760]: E1125 08:12:02.938589 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.939413 4760 scope.go:117] "RemoveContainer" containerID="81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.964023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.964052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.964060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.964074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:02 crc kubenswrapper[4760]: I1125 08:12:02.964083 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:02Z","lastTransitionTime":"2025-11-25T08:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.066284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.066337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.066346 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.066360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.066372 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.168531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.168570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.168578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.168591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.168601 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.255637 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/1.log" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.258092 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.258524 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.270906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.270945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.270957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.270972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.270984 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.273369 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.283467 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.295525 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.311092 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.331024 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:46Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 08:11:46.168392 6282 services_controller.go:452] Built service openshift-marketplace/redhat-operators per-node LB for network=default: []services.LB{}\\\\nI1125 08:11:46.168401 6282 services_controller.go:453] Built service openshift-marketplace/redhat-operators template LB for 
network=default: []services.LB{}\\\\nI1125 08:11:46.168407 6282 services_controller.go:454] Service openshift-marketplace/redhat-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1125 08:11:46.168174 6282 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-nlwcx in node crc\\\\nF1125 08:11:46.168432 6282 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.352261 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.372746 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.373730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.373768 4760 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.373779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.373794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.373809 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.389860 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.404016 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.418996 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.445723 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.472201 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.475538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.475575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.475586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.475604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.475617 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.484415 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc 
kubenswrapper[4760]: I1125 08:12:03.497809 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.510321 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.521612 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.531974 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.540987 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:03Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.577719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.577752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.577763 4760 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.577777 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.577786 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.679967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.680005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.680014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.680028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.680038 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.782334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.782395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.782406 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.782423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.782434 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.884827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.884871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.884880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.884894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.884905 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.937581 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.937616 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:03 crc kubenswrapper[4760]: E1125 08:12:03.937696 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.937805 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:03 crc kubenswrapper[4760]: E1125 08:12:03.937952 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:03 crc kubenswrapper[4760]: E1125 08:12:03.938089 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.987024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.987074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.987088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.987105 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:03 crc kubenswrapper[4760]: I1125 08:12:03.987118 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:03Z","lastTransitionTime":"2025-11-25T08:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.089802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.089855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.089866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.089881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.089892 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:04Z","lastTransitionTime":"2025-11-25T08:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.191873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.191915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.191924 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.191941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.191951 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:04Z","lastTransitionTime":"2025-11-25T08:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.262637 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/2.log" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.263232 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/1.log" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.266070 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27" exitCode=1 Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.266122 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.266189 4760 scope.go:117] "RemoveContainer" containerID="81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.266809 4760 scope.go:117] "RemoveContainer" containerID="372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27" Nov 25 08:12:04 crc kubenswrapper[4760]: E1125 08:12:04.267088 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.288857 4760 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81a3d6f360d46d42e7b9704a179884440e30587ff172d2576551185f78572f4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:11:46Z\\\",\\\"message\\\":\\\"\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 08:11:46.168392 6282 services_controller.go:452] Built service openshift-marketplace/redhat-operators per-node LB for network=default: []services.LB{}\\\\nI1125 08:11:46.168401 6282 services_controller.go:453] Built service openshift-marketplace/redhat-operators template LB for 
network=default: []services.LB{}\\\\nI1125 08:11:46.168407 6282 services_controller.go:454] Service openshift-marketplace/redhat-operators for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1125 08:11:46.168174 6282 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-nlwcx in node crc\\\\nF1125 08:11:46.168432 6282 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:03Z\\\",\\\"message\\\":\\\"p: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1125 08:12:03.800184 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800187 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800201 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800205 6463 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800211 6463 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tj64g in node crc\\\\nF1125 08:12:03.800141 6463 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\
"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.294113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.294152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.294162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.294181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.294193 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:04Z","lastTransitionTime":"2025-11-25T08:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.307922 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.322001 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.336882 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.352296 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.362654 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.383611 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.396044 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.396287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.396358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.396383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.396412 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.396433 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:04Z","lastTransitionTime":"2025-11-25T08:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.413261 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.426773 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.436755 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.448307 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.462300 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.473885 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.487426 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.498427 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 
08:12:04.498473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.498485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.498501 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.498513 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:04Z","lastTransitionTime":"2025-11-25T08:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.498616 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.511406 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.520394 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:04Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.600339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.600370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.600379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.600393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.600404 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:04Z","lastTransitionTime":"2025-11-25T08:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.702995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.703040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.703048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.703062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.703072 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:04Z","lastTransitionTime":"2025-11-25T08:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.804935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.805591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.805683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.805784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.805854 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:04Z","lastTransitionTime":"2025-11-25T08:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.908341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.908383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.908404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.908423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.908436 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:04Z","lastTransitionTime":"2025-11-25T08:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:04 crc kubenswrapper[4760]: I1125 08:12:04.938208 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:04 crc kubenswrapper[4760]: E1125 08:12:04.938359 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.010623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.010666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.010678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.010694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.010705 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.113104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.113146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.113160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.113176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.113186 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.215487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.215526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.215536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.215552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.215563 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.270200 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/2.log" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.275151 4760 scope.go:117] "RemoveContainer" containerID="372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27" Nov 25 08:12:05 crc kubenswrapper[4760]: E1125 08:12:05.275506 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.287189 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.300120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc 
kubenswrapper[4760]: I1125 08:12:05.313011 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.317522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.317574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.317583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.317618 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.317639 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.327020 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.337037 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.348814 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.359813 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.372711 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.387040 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.401372 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.410948 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.420073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.420111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.420121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.420134 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.420143 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.425332 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.434917 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.446728 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.458101 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.470616 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.486972 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:03Z\\\",\\\"message\\\":\\\"p: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1125 08:12:03.800184 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800187 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 
08:12:03.800201 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800205 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800211 6463 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tj64g in node crc\\\\nF1125 08:12:03.800141 6463 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.509019 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:05Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.522401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.522435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.522444 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.522456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.522465 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.624863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.625160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.625324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.625450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.625556 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.728339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.728380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.728391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.728406 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.728417 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.830081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.830114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.830131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.830146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.830155 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.932894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.932935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.932983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.933006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.933022 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:05Z","lastTransitionTime":"2025-11-25T08:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.938127 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.938160 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:05 crc kubenswrapper[4760]: I1125 08:12:05.938178 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:05 crc kubenswrapper[4760]: E1125 08:12:05.938212 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:05 crc kubenswrapper[4760]: E1125 08:12:05.938356 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:05 crc kubenswrapper[4760]: E1125 08:12:05.938398 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.035680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.035729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.035741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.035760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.035773 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.138740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.138780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.138792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.138809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.138821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.241639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.241722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.241755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.241779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.241794 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.343616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.343662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.343670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.343687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.343699 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.445611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.445641 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.445649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.445661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.445673 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.547783 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.547827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.547838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.547854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.547865 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.650430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.650503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.650521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.650542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.650559 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.753108 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.753156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.753170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.753186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.753199 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.856086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.856424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.856436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.856453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.856463 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.938173 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:06 crc kubenswrapper[4760]: E1125 08:12:06.938465 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.949666 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:06Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.959002 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.959039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.959047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.959060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.959069 4760 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:06Z","lastTransitionTime":"2025-11-25T08:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.959461 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:06Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.974340 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:06Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.988275 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:06Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:06 crc kubenswrapper[4760]: I1125 08:12:06.999020 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:06Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.013840 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.028062 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.046733 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019be
e1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2
a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.060503 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.060741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.061125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.061142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.061161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.061173 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.073385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.085830 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.097053 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.114263 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.131919 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:03Z\\\",\\\"message\\\":\\\"p: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1125 08:12:03.800184 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800187 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 
08:12:03.800201 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800205 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800211 6463 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tj64g in node crc\\\\nF1125 08:12:03.800141 6463 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.143551 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.156368 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.163982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc 
kubenswrapper[4760]: I1125 08:12:07.164214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.164326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.164394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.164452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.166833 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.178714 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:07Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:07 crc 
kubenswrapper[4760]: I1125 08:12:07.267531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.267569 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.267578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.267592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.267601 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.370019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.370052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.370060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.370074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.370084 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.472919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.472955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.472964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.472978 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.472988 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.575324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.575360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.575369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.575382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.575392 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.677791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.677861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.677881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.677898 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.677910 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.780330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.780370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.780381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.780401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.780412 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.883113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.883174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.883187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.883202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.883213 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.938310 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.938349 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.938415 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:07 crc kubenswrapper[4760]: E1125 08:12:07.938510 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:07 crc kubenswrapper[4760]: E1125 08:12:07.938624 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:07 crc kubenswrapper[4760]: E1125 08:12:07.938689 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.985129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.985174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.985185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.985199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:07 crc kubenswrapper[4760]: I1125 08:12:07.985210 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:07Z","lastTransitionTime":"2025-11-25T08:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.087236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.087290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.087302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.087317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.087328 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:08Z","lastTransitionTime":"2025-11-25T08:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.189720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.189779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.189791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.189809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.189821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:08Z","lastTransitionTime":"2025-11-25T08:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.292204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.292280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.292297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.292321 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.292336 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:08Z","lastTransitionTime":"2025-11-25T08:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.394918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.394964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.394975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.394994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.395008 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:08Z","lastTransitionTime":"2025-11-25T08:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.497272 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.497303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.497312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.497326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.497335 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:08Z","lastTransitionTime":"2025-11-25T08:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.599081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.599120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.599129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.599141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.599150 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:08Z","lastTransitionTime":"2025-11-25T08:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.701406 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.701442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.701454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.701468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.701476 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:08Z","lastTransitionTime":"2025-11-25T08:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.805153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.805221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.805235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.805279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.805295 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:08Z","lastTransitionTime":"2025-11-25T08:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.908363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.908446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.908470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.908499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.908520 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:08Z","lastTransitionTime":"2025-11-25T08:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:08 crc kubenswrapper[4760]: I1125 08:12:08.937577 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:08 crc kubenswrapper[4760]: E1125 08:12:08.937811 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.010910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.010947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.010956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.010969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.010978 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.113012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.113050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.113058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.113072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.113081 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.215671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.215740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.215751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.215765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.215775 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.318238 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.318303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.318319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.318338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.318355 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.420991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.421030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.421040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.421058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.421069 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.524348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.524394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.524410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.524432 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.524451 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.626665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.626706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.626719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.626739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.626754 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.732433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.732467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.732478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.732492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.732501 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.834354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.834381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.834389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.834401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.834409 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.937561 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.937603 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:09 crc kubenswrapper[4760]: E1125 08:12:09.937709 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.937757 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.937807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.937861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.937922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:09 crc kubenswrapper[4760]: E1125 08:12:09.937857 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.937940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:09 crc kubenswrapper[4760]: I1125 08:12:09.937956 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:09Z","lastTransitionTime":"2025-11-25T08:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:09 crc kubenswrapper[4760]: E1125 08:12:09.938100 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.041353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.041389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.041398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.041411 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.041420 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.143841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.143984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.144008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.144026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.144038 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.246765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.246825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.246838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.246859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.246879 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.349762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.349821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.349832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.349848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.349858 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.452191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.452261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.452276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.452294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.452328 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.554266 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.554311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.554324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.554343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.554355 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.656038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.656076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.656087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.656102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.656113 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.758601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.758645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.758659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.758676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.758688 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.860950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.861014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.861026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.861042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.861055 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.938377 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:10 crc kubenswrapper[4760]: E1125 08:12:10.938520 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.963538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.963578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.963589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.963604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:10 crc kubenswrapper[4760]: I1125 08:12:10.963616 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:10Z","lastTransitionTime":"2025-11-25T08:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.065864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.065909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.065919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.065935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.065943 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.168375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.168409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.168420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.168436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.168446 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.270319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.270354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.270364 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.270379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.270389 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.372541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.372574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.372584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.372597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.372605 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.474959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.474996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.475007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.475023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.475034 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.577799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.577855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.577867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.577886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.577898 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.680352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.680393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.680402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.680416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.680426 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.783058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.783139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.783149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.783168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.783181 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.885942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.885972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.885981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.885996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.886005 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.937618 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.937712 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:11 crc kubenswrapper[4760]: E1125 08:12:11.937768 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:11 crc kubenswrapper[4760]: E1125 08:12:11.937846 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.938055 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:11 crc kubenswrapper[4760]: E1125 08:12:11.938147 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.988766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.988822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.988833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.988858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:11 crc kubenswrapper[4760]: I1125 08:12:11.988871 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:11Z","lastTransitionTime":"2025-11-25T08:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.090968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.091017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.091028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.091044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.091055 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:12Z","lastTransitionTime":"2025-11-25T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.197058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.197199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.197271 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.197344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.197365 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:12Z","lastTransitionTime":"2025-11-25T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.299312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.299355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.299366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.299381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.299392 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:12Z","lastTransitionTime":"2025-11-25T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.401307 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.401363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.401375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.401391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.401401 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:12Z","lastTransitionTime":"2025-11-25T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.503811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.503865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.503874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.503890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.503901 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:12Z","lastTransitionTime":"2025-11-25T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.605892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.606166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.606173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.606187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.606196 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:12Z","lastTransitionTime":"2025-11-25T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.708371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.708416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.708428 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.708443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.708455 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:12Z","lastTransitionTime":"2025-11-25T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.810350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.810386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.810397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.810412 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.810423 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:12Z","lastTransitionTime":"2025-11-25T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.913497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.913542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.913557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.913574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.913586 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:12Z","lastTransitionTime":"2025-11-25T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:12 crc kubenswrapper[4760]: I1125 08:12:12.938412 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:12 crc kubenswrapper[4760]: E1125 08:12:12.938539 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.015835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.015897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.015915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.015938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.015955 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.086308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.086348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.086359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.086375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.086387 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: E1125 08:12:13.097859 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:13Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.101151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.101186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.101196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.101211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.101230 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: E1125 08:12:13.114073 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:13Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.117649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.117682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.117694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.117710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.117720 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: E1125 08:12:13.129945 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:13Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.133980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.134012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.134020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.134034 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.134042 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: E1125 08:12:13.146435 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:13Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.150068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.150101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.150113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.150131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.150142 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: E1125 08:12:13.162944 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:13Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:13 crc kubenswrapper[4760]: E1125 08:12:13.163115 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.164652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.164709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.164724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.164742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.164753 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.270601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.270664 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.270681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.270707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.270725 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.373404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.373449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.373461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.373478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.373491 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.475919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.475960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.475973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.476026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.476039 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.578106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.578143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.578153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.578168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.578177 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.680610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.680653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.680662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.680676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.680687 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.783172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.783224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.783236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.783266 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.783278 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.885568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.885654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.885667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.885713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.885726 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.938095 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.938162 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:13 crc kubenswrapper[4760]: E1125 08:12:13.938235 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.938162 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:13 crc kubenswrapper[4760]: E1125 08:12:13.938323 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:13 crc kubenswrapper[4760]: E1125 08:12:13.938370 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.988081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.988128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.988140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.988159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:13 crc kubenswrapper[4760]: I1125 08:12:13.988170 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:13Z","lastTransitionTime":"2025-11-25T08:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.090421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.090462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.090473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.090489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.090501 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:14Z","lastTransitionTime":"2025-11-25T08:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.192150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.192189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.192197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.192211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.192224 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:14Z","lastTransitionTime":"2025-11-25T08:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.308969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.309026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.309043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.309064 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.309081 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:14Z","lastTransitionTime":"2025-11-25T08:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.411356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.411387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.411396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.411411 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.411422 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:14Z","lastTransitionTime":"2025-11-25T08:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.514267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.514322 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.514350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.514369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.514383 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:14Z","lastTransitionTime":"2025-11-25T08:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.617836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.617883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.617892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.617910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.617920 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:14Z","lastTransitionTime":"2025-11-25T08:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.720707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.720793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.720807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.720847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.720861 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:14Z","lastTransitionTime":"2025-11-25T08:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.823978 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.824019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.824028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.824044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.824054 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:14Z","lastTransitionTime":"2025-11-25T08:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.926450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.926515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.926527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.926542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.926552 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:14Z","lastTransitionTime":"2025-11-25T08:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:14 crc kubenswrapper[4760]: I1125 08:12:14.937833 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:14 crc kubenswrapper[4760]: E1125 08:12:14.937967 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.029954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.030012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.030024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.030043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.030057 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.134297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.134341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.134350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.134366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.134375 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.236292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.236324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.236331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.236345 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.236354 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.338213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.338260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.338269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.338281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.338291 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.440833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.440897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.440909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.440927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.440941 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.544570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.544614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.544625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.544643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.544658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.647436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.647502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.647513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.647531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.647544 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.750558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.750595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.750602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.750616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.750625 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.853359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.853391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.853400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.853414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.853424 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.937660 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.938029 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.938112 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:15 crc kubenswrapper[4760]: E1125 08:12:15.938241 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:15 crc kubenswrapper[4760]: E1125 08:12:15.938371 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:15 crc kubenswrapper[4760]: E1125 08:12:15.938415 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.938516 4760 scope.go:117] "RemoveContainer" containerID="372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27" Nov 25 08:12:15 crc kubenswrapper[4760]: E1125 08:12:15.938683 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.956697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.956742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.956751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.956767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:15 crc kubenswrapper[4760]: I1125 08:12:15.956776 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:15Z","lastTransitionTime":"2025-11-25T08:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.059589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.059652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.059660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.059676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.059687 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.162660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.162712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.162727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.162752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.162771 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.264891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.264933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.264942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.264956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.264965 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.366879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.366917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.366934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.366950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.366960 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.469179 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.469216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.469224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.469237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.469260 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.571639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.571695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.571708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.571728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.571741 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.674043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.674086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.674094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.674107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.674117 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.776878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.776917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.776926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.776940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.776951 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.880537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.880596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.880612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.880639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.880654 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.937650 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:16 crc kubenswrapper[4760]: E1125 08:12:16.937790 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.949131 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:16Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.959913 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-25T08:12:16Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.971535 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:16Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.983119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.983051 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d03
8172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a
578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:16Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.983183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.983311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.983332 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.983342 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:16Z","lastTransitionTime":"2025-11-25T08:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:16 crc kubenswrapper[4760]: I1125 08:12:16.995618 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:16Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.007169 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.019076 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.033570 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 
08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.048839 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.059931 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.071550 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.083626 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.085350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.085385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.085395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.085408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.085417 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:17Z","lastTransitionTime":"2025-11-25T08:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.100098 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:03Z\\\",\\\"message\\\":\\\"p: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1125 08:12:03.800184 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800187 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 
08:12:03.800201 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800205 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800211 6463 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tj64g in node crc\\\\nF1125 08:12:03.800141 6463 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.116587 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.125783 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.135514 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc 
kubenswrapper[4760]: I1125 08:12:17.149640 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.162720 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:17Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.187637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:17 crc 
kubenswrapper[4760]: I1125 08:12:17.187665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.187676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.187690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.187700 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:17Z","lastTransitionTime":"2025-11-25T08:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.290286 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.290344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.290357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.290373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.290740 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:17Z","lastTransitionTime":"2025-11-25T08:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.392400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.392441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.392764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.392795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.392808 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:17Z","lastTransitionTime":"2025-11-25T08:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.475907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:17 crc kubenswrapper[4760]: E1125 08:12:17.476094 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:12:17 crc kubenswrapper[4760]: E1125 08:12:17.476149 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs podName:deaf3f00-2bbd-4217-9414-5a6759e72b60 nodeName:}" failed. No retries permitted until 2025-11-25 08:12:49.476131484 +0000 UTC m=+103.185162279 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs") pod "network-metrics-daemon-v2qd9" (UID: "deaf3f00-2bbd-4217-9414-5a6759e72b60") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.494828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.494869 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.494884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.494900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.494911 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:17Z","lastTransitionTime":"2025-11-25T08:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.597676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.597722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.597735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.597751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.597762 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:17Z","lastTransitionTime":"2025-11-25T08:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.699954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.699994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.700003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.700020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.700031 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:17Z","lastTransitionTime":"2025-11-25T08:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.802915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.802963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.802974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.802993 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.803004 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:17Z","lastTransitionTime":"2025-11-25T08:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.905628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.905684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.905694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.905708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.905717 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:17Z","lastTransitionTime":"2025-11-25T08:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.938233 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.938327 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:17 crc kubenswrapper[4760]: I1125 08:12:17.938341 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:17 crc kubenswrapper[4760]: E1125 08:12:17.938437 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:17 crc kubenswrapper[4760]: E1125 08:12:17.938524 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:17 crc kubenswrapper[4760]: E1125 08:12:17.938554 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.007521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.007554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.007565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.007582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.007596 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.109532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.109784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.109873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.109973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.110041 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.213084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.213124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.213132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.213148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.213156 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.315881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.315937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.315950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.315967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.315979 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.419335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.419374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.419382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.419399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.419408 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.522151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.522208 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.522218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.522233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.522250 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.626969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.627072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.627091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.627447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.627688 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.730576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.730639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.730652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.730672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.730687 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.834313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.834382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.834396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.834421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.834434 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.936859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.936902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.936913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.936947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.936960 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:18Z","lastTransitionTime":"2025-11-25T08:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:18 crc kubenswrapper[4760]: I1125 08:12:18.937593 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:18 crc kubenswrapper[4760]: E1125 08:12:18.937692 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.039585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.039617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.039633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.039648 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.039668 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.141884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.141935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.141948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.141964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.141975 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.244790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.244830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.244840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.244854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.244863 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.346598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.346644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.346652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.346667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.346676 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.450228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.450301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.450317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.450337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.450352 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.552722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.552780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.552790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.552809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.552821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.654809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.654845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.654873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.654887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.654896 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.757434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.757519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.757531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.757550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.757563 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.859986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.860044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.860054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.860073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.860084 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.938157 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.938205 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.938241 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:19 crc kubenswrapper[4760]: E1125 08:12:19.938307 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:19 crc kubenswrapper[4760]: E1125 08:12:19.938401 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:19 crc kubenswrapper[4760]: E1125 08:12:19.938570 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.962122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.962166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.962177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.962194 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:19 crc kubenswrapper[4760]: I1125 08:12:19.962207 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:19Z","lastTransitionTime":"2025-11-25T08:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.064769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.064820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.064831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.064848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.064859 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.167484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.167524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.167536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.167553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.167565 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.270310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.270362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.270371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.270388 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.270398 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.373137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.373166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.373210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.373441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.373457 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.476008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.476061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.476073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.476093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.476106 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.578417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.578459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.578468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.578482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.578492 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.681452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.681500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.681512 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.681532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.681545 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.783980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.784015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.784025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.784040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.784050 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.886210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.886482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.886506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.886528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.886543 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.937834 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:20 crc kubenswrapper[4760]: E1125 08:12:20.937980 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.988628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.988675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.988684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.988699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:20 crc kubenswrapper[4760]: I1125 08:12:20.988713 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:20Z","lastTransitionTime":"2025-11-25T08:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.090799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.090851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.090865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.090887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.090902 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:21Z","lastTransitionTime":"2025-11-25T08:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.193197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.193241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.193268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.193285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.193295 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:21Z","lastTransitionTime":"2025-11-25T08:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.295658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.296441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.296475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.296496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.296509 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:21Z","lastTransitionTime":"2025-11-25T08:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.399143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.399174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.399183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.399195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.399204 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:21Z","lastTransitionTime":"2025-11-25T08:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.501795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.501827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.501839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.501854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.501866 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:21Z","lastTransitionTime":"2025-11-25T08:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.604696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.604751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.604764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.604780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.604794 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:21Z","lastTransitionTime":"2025-11-25T08:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.707094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.707142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.707153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.707173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.707185 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:21Z","lastTransitionTime":"2025-11-25T08:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.808985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.809019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.809031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.809047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.809058 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:21Z","lastTransitionTime":"2025-11-25T08:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.911773 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.911801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.911808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.911820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.911829 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:21Z","lastTransitionTime":"2025-11-25T08:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.938130 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.938163 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:21 crc kubenswrapper[4760]: I1125 08:12:21.938237 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:21 crc kubenswrapper[4760]: E1125 08:12:21.938357 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:21 crc kubenswrapper[4760]: E1125 08:12:21.938426 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:21 crc kubenswrapper[4760]: E1125 08:12:21.938533 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.014313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.014370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.014380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.014394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.014404 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.117181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.117213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.117222 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.117237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.117249 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.219484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.219523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.219532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.219546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.219554 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.322510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.322549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.322558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.322575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.322587 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.424616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.424674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.424686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.424706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.424720 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.527634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.527701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.527712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.527737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.527755 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.630068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.630113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.630124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.630141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.630153 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.732786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.732840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.732853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.732872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.732882 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.835162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.835202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.835213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.835229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.835240 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.937655 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.938093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.938129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.938140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.938162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:22 crc kubenswrapper[4760]: I1125 08:12:22.938176 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:22Z","lastTransitionTime":"2025-11-25T08:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:22 crc kubenswrapper[4760]: E1125 08:12:22.940720 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.043251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.043325 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.043334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.043348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.043356 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.145963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.146015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.146034 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.146057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.146075 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.229483 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.229530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.229543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.229559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.229571 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: E1125 08:12:23.241083 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.244590 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.244619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.244628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.244642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.244651 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: E1125 08:12:23.262238 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.266188 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.266284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.266312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.266340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.266362 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: E1125 08:12:23.278809 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.281951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.281985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.281994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.282007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.282017 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: E1125 08:12:23.294324 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.297867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.297945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.297961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.297980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.297992 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: E1125 08:12:23.311173 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: E1125 08:12:23.311344 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.312998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.313035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.313049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.313070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.313085 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.336711 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/0.log" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.336757 4760 generic.go:334] "Generic (PLEG): container finished" podID="29261de0-ae0c-4828-afed-e6036aa367cf" containerID="c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff" exitCode=1 Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.336789 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x6n7t" event={"ID":"29261de0-ae0c-4828-afed-e6036aa367cf","Type":"ContainerDied","Data":"c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.337181 4760 scope.go:117] "RemoveContainer" containerID="c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.350324 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"2025-11-25T08:11:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2\\\\n2025-11-25T08:11:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2 to /host/opt/cni/bin/\\\\n2025-11-25T08:11:38Z [verbose] multus-daemon started\\\\n2025-11-25T08:11:38Z [verbose] Readiness Indicator file check\\\\n2025-11-25T08:12:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.367513 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.379711 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc 
kubenswrapper[4760]: I1125 08:12:23.391716 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.402776 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.412718 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.415226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.415266 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.415278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.415293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.415305 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.423955 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.435323 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.447407 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.459377 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.473168 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.490728 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"container
ID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.502872 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.514363 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.517699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.517753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.517765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc 
kubenswrapper[4760]: I1125 08:12:23.517780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.517791 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.526369 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 
08:12:23.537664 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.549933 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.569900 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:03Z\\\",\\\"message\\\":\\\"p: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1125 08:12:03.800184 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800187 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 
08:12:03.800201 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800205 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800211 6463 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tj64g in node crc\\\\nF1125 08:12:03.800141 6463 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:23Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.620172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.620214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.620228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.620246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.620277 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.722382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.722438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.722454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.722474 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.722487 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.825399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.825447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.825460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.825479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.825491 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.927442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.927469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.927477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.927490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.927498 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:23Z","lastTransitionTime":"2025-11-25T08:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.938262 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:23 crc kubenswrapper[4760]: E1125 08:12:23.938348 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.938394 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:23 crc kubenswrapper[4760]: E1125 08:12:23.938431 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:23 crc kubenswrapper[4760]: I1125 08:12:23.938463 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:23 crc kubenswrapper[4760]: E1125 08:12:23.938498 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.029553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.029602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.029614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.029629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.029638 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.132381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.132412 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.132422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.132438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.132450 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.235415 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.235457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.235495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.235522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.235535 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.338759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.338827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.338864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.338901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.338927 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.343699 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/0.log" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.343757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x6n7t" event={"ID":"29261de0-ae0c-4828-afed-e6036aa367cf","Type":"ContainerStarted","Data":"ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.365209 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:03Z\\\",\\\"message\\\":\\\"p: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1125 08:12:03.800184 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800187 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 
08:12:03.800201 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800205 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800211 6463 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tj64g in node crc\\\\nF1125 08:12:03.800141 6463 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.390148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.407745 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.421909 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.434713 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.440821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.440857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.440878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.440894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.440906 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.448440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.461342 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.473376 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.483801 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"2025-11-25T08:11:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2\\\\n2025-11-25T08:11:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2 to /host/opt/cni/bin/\\\\n2025-11-25T08:11:38Z [verbose] multus-daemon started\\\\n2025-11-25T08:11:38Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T08:12:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.492978 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c
0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.501927 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc 
kubenswrapper[4760]: I1125 08:12:24.514445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.524571 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.533896 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.542137 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.542807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 
08:12:24.542844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.542855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.542870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.542881 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.551475 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.564198 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.573823 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:24Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.645536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.645571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.645582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.645598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.645610 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.748977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.749038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.749054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.749078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.749095 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.851659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.851694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.851703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.851717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.851729 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.938238 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:24 crc kubenswrapper[4760]: E1125 08:12:24.938372 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.953684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.953733 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.953748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.953767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:24 crc kubenswrapper[4760]: I1125 08:12:24.953781 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:24Z","lastTransitionTime":"2025-11-25T08:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.056502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.056556 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.056571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.056592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.056606 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.159596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.159670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.159689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.159712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.159726 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.265842 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.265875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.265883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.265895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.265904 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.367943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.367968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.367976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.367988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.367995 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.471101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.471166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.471204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.471238 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.471299 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.574344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.574402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.574420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.574442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.574459 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.677271 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.677319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.677340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.677361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.677374 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.780365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.780424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.780437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.780457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.780471 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.883660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.883703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.883712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.883730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.883739 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.938108 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.938180 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:25 crc kubenswrapper[4760]: E1125 08:12:25.938244 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.938186 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:25 crc kubenswrapper[4760]: E1125 08:12:25.938352 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:25 crc kubenswrapper[4760]: E1125 08:12:25.938492 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.986291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.986632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.986785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.986912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:25 crc kubenswrapper[4760]: I1125 08:12:25.987000 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:25Z","lastTransitionTime":"2025-11-25T08:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.089681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.089787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.089799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.089813 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.089824 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:26Z","lastTransitionTime":"2025-11-25T08:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.191637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.191741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.191759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.191780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.191794 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:26Z","lastTransitionTime":"2025-11-25T08:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.293822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.294081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.294152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.294229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.294345 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:26Z","lastTransitionTime":"2025-11-25T08:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.397379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.397438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.397448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.397461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.397471 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:26Z","lastTransitionTime":"2025-11-25T08:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.499408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.499435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.499442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.499457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.499466 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:26Z","lastTransitionTime":"2025-11-25T08:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.601907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.601956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.601970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.601983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.601994 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:26Z","lastTransitionTime":"2025-11-25T08:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.704387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.704423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.704434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.704450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.704461 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:26Z","lastTransitionTime":"2025-11-25T08:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.807943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.808283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.808378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.808479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.808582 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:26Z","lastTransitionTime":"2025-11-25T08:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.911754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.911833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.911855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.911877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.911893 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:26Z","lastTransitionTime":"2025-11-25T08:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.938337 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:26 crc kubenswrapper[4760]: E1125 08:12:26.938556 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.955934 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:26Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.969676 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:26Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:26 crc kubenswrapper[4760]: I1125 08:12:26.983387 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:26Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.003991 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:03Z\\\",\\\"message\\\":\\\"p: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1125 08:12:03.800184 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800187 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 
08:12:03.800201 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800205 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800211 6463 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tj64g in node crc\\\\nF1125 08:12:03.800141 6463 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.014575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.014610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.014622 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.014636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.014646 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.023683 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.037084 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.047999 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.062540 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.074050 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.085174 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.097884 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"2025-11-25T08:11:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2\\\\n2025-11-25T08:11:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2 to /host/opt/cni/bin/\\\\n2025-11-25T08:11:38Z [verbose] multus-daemon started\\\\n2025-11-25T08:11:38Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T08:12:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.108297 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c
0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.117607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.117674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.117690 4760 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.117707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.117771 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.119819 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc 
kubenswrapper[4760]: I1125 08:12:27.130969 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.141220 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.154612 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.168148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.181063 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:27Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.219952 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.219984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.219994 4760 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.220007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.220016 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.322461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.322500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.322509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.322522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.322532 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.424575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.425193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.425339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.425436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.425532 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.527693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.527723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.527731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.527743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.527752 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.629955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.629992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.630000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.630015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.630026 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.732421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.732471 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.732485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.732500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.732510 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.835764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.835809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.835819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.835834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.835843 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.937842 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.937883 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.937859 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.938000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:27 crc kubenswrapper[4760]: E1125 08:12:27.938009 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.938028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.938062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.938083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:27 crc kubenswrapper[4760]: I1125 08:12:27.938101 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:27Z","lastTransitionTime":"2025-11-25T08:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:27 crc kubenswrapper[4760]: E1125 08:12:27.938124 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:27 crc kubenswrapper[4760]: E1125 08:12:27.938198 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.040739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.040794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.040805 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.040823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.040836 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.144534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.144594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.144607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.144626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.144640 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.247882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.247958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.247973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.247999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.248015 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.350873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.350919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.350930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.350948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.350961 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.458277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.458335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.458347 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.458368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.458381 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.561482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.561553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.561571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.561599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.561617 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.664492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.664523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.664531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.664544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.664553 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.767031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.767355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.767453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.767555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.767643 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.870339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.870385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.870396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.870413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.870424 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.937502 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:28 crc kubenswrapper[4760]: E1125 08:12:28.937640 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.973558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.973855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.973953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.974052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:28 crc kubenswrapper[4760]: I1125 08:12:28.974139 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:28Z","lastTransitionTime":"2025-11-25T08:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.076499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.076574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.076599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.076629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.076651 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:29Z","lastTransitionTime":"2025-11-25T08:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.180382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.180461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.180473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.180493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.180504 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:29Z","lastTransitionTime":"2025-11-25T08:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.282986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.283293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.283368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.283441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.283502 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:29Z","lastTransitionTime":"2025-11-25T08:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.385809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.385838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.385846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.385860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.385868 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:29Z","lastTransitionTime":"2025-11-25T08:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.488571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.488602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.488611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.488624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.488633 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:29Z","lastTransitionTime":"2025-11-25T08:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.591120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.591171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.591196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.591220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.591235 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:29Z","lastTransitionTime":"2025-11-25T08:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.694894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.694946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.694961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.694981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.694996 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:29Z","lastTransitionTime":"2025-11-25T08:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.797765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.797885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.797907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.797927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.797939 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:29Z","lastTransitionTime":"2025-11-25T08:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.804538 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.804684 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 08:13:33.804657347 +0000 UTC m=+147.513688152 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.804734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.804844 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.804906 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.804986 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.804991 4760 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:13:33.804966316 +0000 UTC m=+147.513997151 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.805040 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 08:13:33.805029968 +0000 UTC m=+147.514060763 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.900946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.901026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.901049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.901079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.901103 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:29Z","lastTransitionTime":"2025-11-25T08:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.905856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.905948 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.906046 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.906078 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.906098 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.906160 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 08:12:29 crc 
kubenswrapper[4760]: E1125 08:12:29.906176 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 08:13:33.906152561 +0000 UTC m=+147.615183386 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.906187 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.906206 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.906322 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 08:13:33.906297575 +0000 UTC m=+147.615328400 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.938169 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.938228 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:29 crc kubenswrapper[4760]: I1125 08:12:29.938311 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.938426 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.938491 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:29 crc kubenswrapper[4760]: E1125 08:12:29.938559 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.003619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.003668 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.003677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.003697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.003707 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.105903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.105958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.105972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.105989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.106002 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.208047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.208104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.208120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.208145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.208161 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.312481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.312551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.312570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.312593 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.312613 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.415752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.415899 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.415974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.416002 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.416019 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.519418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.519474 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.519495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.519522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.519543 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.622905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.622968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.622991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.623028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.623048 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.725784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.725827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.725835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.725848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.725856 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.828495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.828535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.828547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.828563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.828574 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.930849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.930927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.930937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.930950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.930958 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:30Z","lastTransitionTime":"2025-11-25T08:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.937413 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:30 crc kubenswrapper[4760]: E1125 08:12:30.937746 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:30 crc kubenswrapper[4760]: I1125 08:12:30.940024 4760 scope.go:117] "RemoveContainer" containerID="372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.034370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.034643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.034652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.034724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.034739 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.137884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.137923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.137935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.137950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.137963 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.239981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.240024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.240035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.240049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.240060 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.342432 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.342485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.342500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.342518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.342528 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.364147 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/2.log" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.366829 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.367398 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.380430 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad079c1c3d242243227f6b7cde3bad1670bfc9df
7ddedaebd95c95a018b2f6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"2025-11-25T08:11:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2\\\\n2025-11-25T08:11:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2 to /host/opt/cni/bin/\\\\n2025-11-25T08:11:38Z [verbose] multus-daemon started\\\\n2025-11-25T08:11:38Z [verbose] Readiness Indicator file check\\\\n2025-11-25T08:12:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.391106 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.399886 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc 
kubenswrapper[4760]: I1125 08:12:31.414025 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.426293 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.435899 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.444852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.444901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.444918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.444939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.444955 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.447283 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.458181 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.469961 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.479242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.492329 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.509023 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"container
ID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.522091 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.534659 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.547010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.547036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.547044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc 
kubenswrapper[4760]: I1125 08:12:31.547056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.547064 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.576131 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 
08:12:31.586765 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.600407 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.621345 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:03Z\\\",\\\"message\\\":\\\"p: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1125 08:12:03.800184 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800187 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 
08:12:03.800201 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800205 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800211 6463 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tj64g in node crc\\\\nF1125 08:12:03.800141 6463 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:12:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:31Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.649073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.649129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.649145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.649165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.649180 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.751600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.751635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.751643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.751658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.751666 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.853370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.853420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.853430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.853445 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.853456 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.937549 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:31 crc kubenswrapper[4760]: E1125 08:12:31.937744 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.937932 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.937960 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:31 crc kubenswrapper[4760]: E1125 08:12:31.938016 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:31 crc kubenswrapper[4760]: E1125 08:12:31.938178 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.956209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.956308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.956327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.956363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:31 crc kubenswrapper[4760]: I1125 08:12:31.956398 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:31Z","lastTransitionTime":"2025-11-25T08:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.066843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.066910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.066935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.066963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.066985 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.170047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.170110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.170131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.170162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.170184 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.274289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.274366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.274384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.274408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.274425 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.371142 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/3.log" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.371688 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/2.log" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.373854 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" exitCode=1 Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.373892 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.373929 4760 scope.go:117] "RemoveContainer" containerID="372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.375826 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:12:32 crc kubenswrapper[4760]: E1125 08:12:32.376034 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.376052 4760 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.376097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.376112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.376126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.376137 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.390788 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"2025-11-25T08:11:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2\\\\n2025-11-25T08:11:37+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2 to /host/opt/cni/bin/\\\\n2025-11-25T08:11:38Z [verbose] multus-daemon started\\\\n2025-11-25T08:11:38Z [verbose] Readiness Indicator file check\\\\n2025-11-25T08:12:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.401779 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.410751 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc 
kubenswrapper[4760]: I1125 08:12:32.420365 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.430661 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.440242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.450069 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.460598 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.471488 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.478692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.478763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.478776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.478793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.478805 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.480696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.492740 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1
f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.508822 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.521309 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.538785 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.550422 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.561625 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.578197 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.580750 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.580795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.580803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.580817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.580828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.597157 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://372560bc087f5d4e2d83eb9c5dd907077f6f79ac1c9647a37216e4ad37cd9a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:03Z\\\",\\\"message\\\":\\\"p: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1125 08:12:03.800184 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800187 6463 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 
08:12:03.800201 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-tj64g\\\\nI1125 08:12:03.800205 6463 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-fcnxs\\\\nI1125 08:12:03.800211 6463 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-tj64g in node crc\\\\nF1125 08:12:03.800141 6463 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:31Z\\\",\\\"message\\\":\\\"i/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:12:31.727401 6841 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:12:31.727467 6841 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:12:31.727438 6841 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 08:12:31.727605 6841 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:12:31.728315 6841 factory.go:656] Stopping watch factory\\\\nI1125 08:12:31.746701 6841 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 08:12:31.746740 6841 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 08:12:31.746845 6841 ovnkube.go:599] Stopped ovnkube\\\\nI1125 08:12:31.746900 6841 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 08:12:31.747084 6841 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:32Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.683078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.683135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.683146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.683165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.683176 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.785303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.785359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.785375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.785396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.785408 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.887145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.887185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.887200 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.887215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.887225 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.937999 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:32 crc kubenswrapper[4760]: E1125 08:12:32.938147 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.989493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.990062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.990084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.990102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:32 crc kubenswrapper[4760]: I1125 08:12:32.990116 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:32Z","lastTransitionTime":"2025-11-25T08:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.092609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.092661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.092677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.092700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.092716 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.194995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.195061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.195078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.195098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.195109 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.297384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.297428 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.297440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.297452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.297460 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.378693 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/3.log" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.382909 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.383186 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.399821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.399862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.399871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.399884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.399894 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.402705 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.415040 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rb
ac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.426876 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.443699 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.457273 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.470558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.470598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.470606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.470623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.470635 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.471479 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.482715 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.484188 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.488073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.488118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.488135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.488154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.488170 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.498309 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.500234 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.504204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.504265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.504275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.504292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.504301 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.509709 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.514968 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.518881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.518977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.519000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.519030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.519051 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.525939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.536802 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.539539 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.540697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.540754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.540768 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.540788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.540800 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.560668 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.560769 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.562651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.562677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.562685 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.562699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.562710 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.563132 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58
673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.585859 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:31Z\\\",\\\"message\\\":\\\"i/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:12:31.727401 6841 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:12:31.727467 6841 reflector.go:311] Stopping reflector 
*v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:12:31.727438 6841 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 08:12:31.727605 6841 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:12:31.728315 6841 factory.go:656] Stopping watch factory\\\\nI1125 08:12:31.746701 6841 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 08:12:31.746740 6841 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 08:12:31.746845 6841 ovnkube.go:599] Stopped ovnkube\\\\nI1125 08:12:31.746900 6841 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 08:12:31.747084 6841 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.609266 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.618961 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.629593 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc 
kubenswrapper[4760]: I1125 08:12:33.643870 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.655920 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"2025-11-25T08:11:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2\\\\n2025-11-25T08:11:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2 to /host/opt/cni/bin/\\\\n2025-11-25T08:11:38Z [verbose] multus-daemon started\\\\n2025-11-25T08:11:38Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T08:12:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:33Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.664572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.664619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.664631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.664651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.664665 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.770587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.770644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.770659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.770674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.770685 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.873349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.873446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.873480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.873511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.873534 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.938332 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.938385 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.938416 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.938491 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.938606 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:33 crc kubenswrapper[4760]: E1125 08:12:33.938731 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.976419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.976476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.976493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.976516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:33 crc kubenswrapper[4760]: I1125 08:12:33.976532 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:33Z","lastTransitionTime":"2025-11-25T08:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.079663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.079751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.079776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.079805 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.079829 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:34Z","lastTransitionTime":"2025-11-25T08:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.183211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.183309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.183334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.183363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.183384 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:34Z","lastTransitionTime":"2025-11-25T08:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.285221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.285290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.285301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.285319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.285332 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:34Z","lastTransitionTime":"2025-11-25T08:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.387579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.387624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.387635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.387650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.387662 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:34Z","lastTransitionTime":"2025-11-25T08:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.490902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.490957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.490974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.490996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.491012 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:34Z","lastTransitionTime":"2025-11-25T08:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.593104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.593155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.593166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.593183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.593195 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:34Z","lastTransitionTime":"2025-11-25T08:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.696150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.696199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.696208 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.696222 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.696232 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:34Z","lastTransitionTime":"2025-11-25T08:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.798474 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.798518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.798527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.798541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.798553 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:34Z","lastTransitionTime":"2025-11-25T08:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.900316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.900352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.900360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.900372 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.900380 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:34Z","lastTransitionTime":"2025-11-25T08:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:34 crc kubenswrapper[4760]: I1125 08:12:34.937556 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:34 crc kubenswrapper[4760]: E1125 08:12:34.937689 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.003130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.003166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.003177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.003192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.003204 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.105437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.105507 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.105517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.105530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.105539 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.207615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.207657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.207668 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.207683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.207694 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.310092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.310158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.310176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.310202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.310220 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.413166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.413244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.413314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.413348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.413370 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.515919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.516001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.516040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.516074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.516101 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.619577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.619671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.619685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.619747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.619766 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.723305 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.723384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.723404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.723435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.723474 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.826329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.826429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.826460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.826493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.826516 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.929739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.929802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.929825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.929851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.929875 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:35Z","lastTransitionTime":"2025-11-25T08:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.938098 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.938168 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:35 crc kubenswrapper[4760]: I1125 08:12:35.938098 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:35 crc kubenswrapper[4760]: E1125 08:12:35.938294 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:35 crc kubenswrapper[4760]: E1125 08:12:35.938372 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:35 crc kubenswrapper[4760]: E1125 08:12:35.938443 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.032469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.032525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.032545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.032575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.032596 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.140347 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.140417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.140434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.140456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.140472 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.243889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.243940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.243956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.243981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.243998 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.347643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.347698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.347708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.347722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.347733 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.449841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.449911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.449948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.449980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.450006 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.553468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.553545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.553564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.553590 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.553608 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.656577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.656650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.656674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.656701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.656722 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.759063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.759112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.759124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.759143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.759155 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.861862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.861915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.861933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.861957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.861976 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.938573 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:36 crc kubenswrapper[4760]: E1125 08:12:36.938925 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.956975 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.965942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.966090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.966111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.966136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.966200 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:36Z","lastTransitionTime":"2025-11-25T08:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.975556 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:36 crc kubenswrapper[4760]: I1125 08:12:36.990013 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:36Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.004449 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.027072 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:31Z\\\",\\\"message\\\":\\\"i/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:12:31.727401 6841 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:12:31.727467 6841 reflector.go:311] Stopping reflector 
*v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:12:31.727438 6841 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 08:12:31.727605 6841 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:12:31.728315 6841 factory.go:656] Stopping watch factory\\\\nI1125 08:12:31.746701 6841 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 08:12:31.746740 6841 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 08:12:31.746845 6841 ovnkube.go:599] Stopped ovnkube\\\\nI1125 08:12:31.746900 6841 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 08:12:31.747084 6841 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.048007 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.064839 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.070745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.070778 4760 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.070787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.070811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.070828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.076863 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.088618 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.099954 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"2025-11-25T08:11:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2\\\\n2025-11-25T08:11:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2 to /host/opt/cni/bin/\\\\n2025-11-25T08:11:38Z [verbose] multus-daemon started\\\\n2025-11-25T08:11:38Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T08:12:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.109135 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c
0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.119528 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e075860
2df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.130520 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.142586 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.153588 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.162874 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.172578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.172617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.172625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.172639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.172650 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.173942 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.182616 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:37Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.275162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.275228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.275265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.275293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.275308 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.378425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.378703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.378713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.378727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.378736 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.480926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.480967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.480982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.481007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.481022 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.583892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.583971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.583986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.584003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.584045 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.686419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.686493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.686506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.686521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.686529 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.789482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.789517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.789526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.789541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.789550 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.891545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.891591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.891603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.891620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.891633 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.938044 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.938134 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:37 crc kubenswrapper[4760]: E1125 08:12:37.938167 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:37 crc kubenswrapper[4760]: E1125 08:12:37.938282 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.938366 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:37 crc kubenswrapper[4760]: E1125 08:12:37.938690 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.952515 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.995345 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.995413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.995438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.995469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:37 crc kubenswrapper[4760]: I1125 08:12:37.995492 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:37Z","lastTransitionTime":"2025-11-25T08:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.098220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.098272 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.098287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.098303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.098315 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:38Z","lastTransitionTime":"2025-11-25T08:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.201317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.201357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.201373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.201389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.201400 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:38Z","lastTransitionTime":"2025-11-25T08:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.304196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.304283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.304301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.304324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.304340 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:38Z","lastTransitionTime":"2025-11-25T08:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.406111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.406142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.406150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.406163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.406172 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:38Z","lastTransitionTime":"2025-11-25T08:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.508758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.508812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.508824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.508856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.508871 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:38Z","lastTransitionTime":"2025-11-25T08:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.611823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.611867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.611875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.611889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.611898 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:38Z","lastTransitionTime":"2025-11-25T08:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.714277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.714340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.714351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.714369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.714381 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:38Z","lastTransitionTime":"2025-11-25T08:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.817116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.817152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.817170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.817192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.817203 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:38Z","lastTransitionTime":"2025-11-25T08:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.919940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.919982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.919999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.920016 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.920028 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:38Z","lastTransitionTime":"2025-11-25T08:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:38 crc kubenswrapper[4760]: I1125 08:12:38.937531 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:38 crc kubenswrapper[4760]: E1125 08:12:38.937660 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.022301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.022335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.022345 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.022360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.022371 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.125486 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.125520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.125532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.125546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.125567 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.228119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.228194 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.228222 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.228272 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.228294 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.330392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.330434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.330443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.330458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.330467 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.432959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.433004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.433015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.433032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.433042 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.535487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.535563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.535579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.535602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.535617 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.637787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.637819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.637826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.637838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.637848 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.740903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.740942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.740955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.740971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.740982 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.842669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.842708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.842716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.842729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.842738 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.938053 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.938128 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.938157 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:39 crc kubenswrapper[4760]: E1125 08:12:39.938186 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:39 crc kubenswrapper[4760]: E1125 08:12:39.938297 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:39 crc kubenswrapper[4760]: E1125 08:12:39.938375 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.944613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.944694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.944709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.944723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:39 crc kubenswrapper[4760]: I1125 08:12:39.944732 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:39Z","lastTransitionTime":"2025-11-25T08:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.047104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.047142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.047154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.047171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.047183 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.149887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.149929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.149940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.149957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.149968 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.253066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.253147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.253169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.253195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.253213 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.356273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.356343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.356355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.356375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.356387 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.458737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.458778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.458792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.458809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.458821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.570384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.570423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.570439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.570453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.570463 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.672708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.672753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.672767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.672787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.672805 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.774923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.774960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.774972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.774988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.774998 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.877436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.877470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.877479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.877493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.877503 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.937697 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:40 crc kubenswrapper[4760]: E1125 08:12:40.937830 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.979301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.979351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.979367 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.979388 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:40 crc kubenswrapper[4760]: I1125 08:12:40.979402 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:40Z","lastTransitionTime":"2025-11-25T08:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.083050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.083099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.083115 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.083132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.083144 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:41Z","lastTransitionTime":"2025-11-25T08:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.187723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.187794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.187818 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.187850 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.187872 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:41Z","lastTransitionTime":"2025-11-25T08:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.290545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.290582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.290592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.290609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.290620 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:41Z","lastTransitionTime":"2025-11-25T08:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.396489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.396550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.396568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.396592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.396609 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:41Z","lastTransitionTime":"2025-11-25T08:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.498754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.498792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.498803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.498822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.498832 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:41Z","lastTransitionTime":"2025-11-25T08:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.602033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.602073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.602083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.602101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.602113 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:41Z","lastTransitionTime":"2025-11-25T08:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.704263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.704295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.704304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.704320 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.704329 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:41Z","lastTransitionTime":"2025-11-25T08:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.806690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.806729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.806741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.806756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.806765 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:41Z","lastTransitionTime":"2025-11-25T08:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.909340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.909398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.909407 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.909422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.909439 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:41Z","lastTransitionTime":"2025-11-25T08:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.937887 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:41 crc kubenswrapper[4760]: E1125 08:12:41.938003 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.938060 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:41 crc kubenswrapper[4760]: E1125 08:12:41.938108 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:41 crc kubenswrapper[4760]: I1125 08:12:41.938138 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:41 crc kubenswrapper[4760]: E1125 08:12:41.938193 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.011502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.011545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.011553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.011568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.011576 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.113946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.114319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.114435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.114522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.114593 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.216799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.216862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.216885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.216912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.216934 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.319467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.319567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.319585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.319610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.319628 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.421819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.421893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.421908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.421932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.421953 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.524300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.524403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.524414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.524439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.524454 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.627364 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.627439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.627456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.627488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.627502 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.730010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.730059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.730070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.730086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.730097 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.832801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.832899 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.832919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.832941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.832958 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.936003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.936055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.936066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.936081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.936092 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:42Z","lastTransitionTime":"2025-11-25T08:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:42 crc kubenswrapper[4760]: I1125 08:12:42.938378 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:42 crc kubenswrapper[4760]: E1125 08:12:42.938599 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.038093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.038140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.038151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.038167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.038181 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.140495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.140536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.140548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.140566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.140580 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.243363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.243424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.243436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.243457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.243469 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.346675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.346739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.346756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.346783 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.346803 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.448798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.448847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.448859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.448879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.448892 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.551376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.551425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.551436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.551453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.551464 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.654019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.654061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.654070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.654085 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.654095 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.720160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.720284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.720312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.720341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.720361 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: E1125 08:12:43.735344 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.739693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.739723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.739733 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.739749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.739760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: E1125 08:12:43.757682 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.761751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.761797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.761809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.761825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.761839 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: E1125 08:12:43.785826 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.793789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.793835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.793846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.793865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.793879 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: E1125 08:12:43.814268 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.820553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.820598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.820610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.820629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.820642 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: E1125 08:12:43.836137 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:43Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:43 crc kubenswrapper[4760]: E1125 08:12:43.836398 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.838359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.838419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.838434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.838454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.838468 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.937992 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.938060 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.938127 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:43 crc kubenswrapper[4760]: E1125 08:12:43.938165 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:43 crc kubenswrapper[4760]: E1125 08:12:43.938240 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:43 crc kubenswrapper[4760]: E1125 08:12:43.938316 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.940814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.940839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.940848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.940859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:43 crc kubenswrapper[4760]: I1125 08:12:43.940873 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:43Z","lastTransitionTime":"2025-11-25T08:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.043754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.043805 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.043820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.043839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.043854 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.147121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.147196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.147220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.147304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.147328 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.250800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.250872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.250898 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.250929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.250952 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.353151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.353224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.353284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.353330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.353353 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.456112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.456712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.456724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.456738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.456749 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.559360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.559404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.559414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.559428 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.559437 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.662027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.662083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.662096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.662113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.662124 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.764203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.764243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.764275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.764294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.764309 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.866786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.866835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.866846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.866860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.866870 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.937908 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:44 crc kubenswrapper[4760]: E1125 08:12:44.938087 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.938829 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:12:44 crc kubenswrapper[4760]: E1125 08:12:44.939098 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.970101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.970167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.970186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.970207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:44 crc kubenswrapper[4760]: I1125 08:12:44.970220 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:44Z","lastTransitionTime":"2025-11-25T08:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.072643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.072749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.072765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.072818 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.072832 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.175023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.175068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.175079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.175097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.175109 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.277430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.277481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.277492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.277508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.277518 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.380035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.380070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.380078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.380091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.380100 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.482165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.482213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.482221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.482238 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.482275 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.584665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.584711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.584721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.584737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.584748 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.687505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.687551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.687561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.687573 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.687581 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.789791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.789828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.789841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.789857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.789868 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.892580 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.892627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.892638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.892657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.892670 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.938232 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.938290 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.938386 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:45 crc kubenswrapper[4760]: E1125 08:12:45.938516 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:45 crc kubenswrapper[4760]: E1125 08:12:45.938636 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:45 crc kubenswrapper[4760]: E1125 08:12:45.938776 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.995072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.995125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.995136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.995155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:45 crc kubenswrapper[4760]: I1125 08:12:45.995169 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:45Z","lastTransitionTime":"2025-11-25T08:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.097299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.097347 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.097362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.097379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.097391 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:46Z","lastTransitionTime":"2025-11-25T08:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.199588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.199615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.199623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.199634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.199643 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:46Z","lastTransitionTime":"2025-11-25T08:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.301693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.301738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.301792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.301809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.301820 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:46Z","lastTransitionTime":"2025-11-25T08:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.404692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.404722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.404730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.404743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.404750 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:46Z","lastTransitionTime":"2025-11-25T08:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.507741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.507785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.507796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.507812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.507824 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:46Z","lastTransitionTime":"2025-11-25T08:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.611313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.611351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.611363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.611378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.611389 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:46Z","lastTransitionTime":"2025-11-25T08:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.714555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.714616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.714638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.714665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.714795 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:46Z","lastTransitionTime":"2025-11-25T08:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.817408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.817441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.817457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.817472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.817481 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:46Z","lastTransitionTime":"2025-11-25T08:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.920430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.920469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.920479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.920496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.920508 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:46Z","lastTransitionTime":"2025-11-25T08:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.938213 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:46 crc kubenswrapper[4760]: E1125 08:12:46.938406 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.955589 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.971454 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5366e35-adc6-45e2-966c-55fc7e6c8b79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://325aa44f95c97b92bc58673d67059446c90e48a1e2acc5136a6efe26d098035a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9406a981f5e162cbfa3d88ccc8b75e6a2ab00d45a914ab939682a0a32ca950d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1f21c49d59d077e5e24db28775bcd7fe571b28717539bfe6cf6d3e4406dced7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d191a668d54e44f5a92d0f8cabaf04246c126fcabd931e81cdfc304de55f162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad311
50ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad31150ba890dede85cd789519b5bc76a686bbc3f304356f0beda4221f044c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a65522d9dbe90e3af1d0fa60ae15baed497b5bd759eee4af1fa62baa6a9d96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea7b9125bd35629d6b3e686824953e877de02f7b069fc9fae0a415ae34f7938f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wbn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r4rlz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:46 crc kubenswrapper[4760]: I1125 08:12:46.997807 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"244c5c71-3110-4dcd-89f3-4dadfc405131\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:31Z\\\",\\\"message\\\":\\\"i/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:12:31.727401 6841 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 08:12:31.727467 6841 reflector.go:311] Stopping reflector 
*v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:12:31.727438 6841 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 08:12:31.727605 6841 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 08:12:31.728315 6841 factory.go:656] Stopping watch factory\\\\nI1125 08:12:31.746701 6841 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 08:12:31.746740 6841 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 08:12:31.746845 6841 ovnkube.go:599] Stopped ovnkube\\\\nI1125 08:12:31.746900 6841 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 08:12:31.747084 6841 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:12:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f8d1da8a96f6cd26
0fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fk6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c2bhp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:46Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.009546 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"721c148c-13a9-4044-8fbf-4c8f88ee4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b7d3921db01c5969f16ede70d3ff767417330f708b885d315e3ea1b4cc155f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4549d34f0a64cd73b6e0c7155b9d08507cd6fa52d606800e4fd1859a9d54c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4549d34f0a64cd73b6e0c7155b9d08507cd6fa52d606800e4fd1859a9d54c2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.023119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.023158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.023169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.023185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.023197 4760 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.029895 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"598be8ab-207c-4d11-bcf2-28cef33f1532\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4bedbe1e48c01d54c5ca10fd124499ff53ea775e4d414845fc681ee0c89f9a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c48bcd6b4d16fb3d8a1cc9cb5f44f16dcf129baab7eed613d8031ed28700dcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95a94e513519b7f3353c76ae3ce05418ebe3e1fe662130bd6b3a34dbdb33b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://add8c1877b2da3090351d8d5c1ccae11587173cff0fa2c999894e8c21e5e6bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6961ab17248000f0584935c4ae90015d99e193c1af4666ed0ef88b99b79ee34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd75
7bbe315cc7cf614e2939203f46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a18b78035ebd09557fd242ed07881326cd757bbe315cc7cf614e2939203f46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c8e4b32d7790c757d9da5c7ecf45947a15eceea3803460bdc7cf7901ef1825\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a2a92e7b83320fc44a1d4a685d077f7b89cfd0bb690a55435391582d6acbbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.043729 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1e0cae-103c-4c99-bfde-5c974e0d674c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T08:11:25Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 08:11:25.347002 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 08:11:25.347146 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 08:11:25.351460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1497528051/tls.crt::/tmp/serving-cert-1497528051/tls.key\\\\\\\"\\\\nI1125 08:11:25.657445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 08:11:25.662299 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 08:11:25.662326 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 08:11:25.662348 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 08:11:25.662353 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 08:11:25.680867 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1125 08:11:25.680891 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 08:11:25.680900 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680906 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 08:11:25.680914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 08:11:25.680919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 08:11:25.680923 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 08:11:25.680927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1125 08:11:25.682146 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67d212aefc67ea93e474229df0b5b029f6
f387e6a7790a0176a318763f629c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.056360 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.068536 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677919c6bf87b212eb6f149848462786e386243e70871c17ee17831611dea42a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a587f17546ca37ae30e0784a06c7cf3cb78742ac2697037deb66bf61df8c4476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.083116 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.096047 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x6n7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29261de0-ae0c-4828-afed-e6036aa367cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T08:12:23Z\\\",\\\"message\\\":\\\"2025-11-25T08:11:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2\\\\n2025-11-25T08:11:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8dc7d26c-568c-4e55-9a19-b36d2cba74e2 to /host/opt/cni/bin/\\\\n2025-11-25T08:11:38Z [verbose] multus-daemon started\\\\n2025-11-25T08:11:38Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T08:12:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjjr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x6n7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.105968 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nlwcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d43a69a9-eef6-4091-b9fd-9bc0a283df79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49948b0bde3f4172c
0a5df6217c5d2e0ab1a49078e5cb76e2e35c6f087afacd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6ml2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nlwcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.116166 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaf3f00-2bbd-4217-9414-5a6759e72b60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvxr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v2qd9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc 
kubenswrapper[4760]: I1125 08:12:47.124988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.125030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.125039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.125052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.125060 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.128074 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2224b9f9-5bee-4a60-861e-2b94a047882a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25c678216e27cb4a19c9b550286bd825b1379ede7a5d1bd5b51cee67df0fb4ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612b8f348dc
98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754b4a950e901dce366a89d00ee5eb2ecf5361459a870b103a28872f9e5a203b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abc86e66f739e21f8a54f1f1a92a1c57ab6e72c12d0370ce77c00a8f53f4344d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.139228 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1562914b-41a5-4262-a615-6e81861486aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a869e9e0af9fb96536b050be20092a079d6773d3492a2fabd0a13207b35dda79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b903f68b344733e36f70133a143eeb34ea831f53c46dd6c6d70722431321d9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e65a5496087acdb8ec7a77c5e9cc07f5b52ff52d53c076565c10f126ec350f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://02342f6ccdbca7de2d4e061d059de377bc77a6919a51d02dbb20416842733051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T08:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T08:11:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.149436 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24a1af6b5785b0b50fb591ee62e6fe5aef41923194c7b4887dc821028144150d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.158916 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f5c9247-0023-4cef-8299-ca90407f76f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bb4d365eccb989c4ea09db0e59730a8e85d115376e03278091466a9b27f52b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wcvdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fcnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.169608 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38a058ae-552b-4862-a55a-2cd1c775e77a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4019caf8bacdbf99dc81758188274880f4f9b03ab7c83b09b5e3e0685c4ca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7ce3d508fd943c5e62dd5a6533191d5eae6c
685171e1efc8fcab29f5ac6203b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpp4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c8n4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.182621 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb0f7317a4a81112d42355eee0aa2d374fbd7b8d87407e9716182757c38afb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.194044 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tj64g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06641e4d-9c74-4b5c-a664-d4f00118885a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T08:11:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8cc194bf129405f1f7be9368a0eb2fa2991b3795bb209954c0ab3bbdfc1ed30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T08:11:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhd7w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T08:11:31Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tj64g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:47Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.228290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.228440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.228462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.228486 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.228534 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.331327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.331370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.331383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.331399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.331410 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.433525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.433569 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.433580 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.433595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.433605 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.536002 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.536045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.536056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.536074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.536085 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.638236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.638288 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.638297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.638313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.638321 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.740841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.740896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.740909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.740926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.740940 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.842966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.843010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.843030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.843050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.843064 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.938057 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.938133 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.938197 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:47 crc kubenswrapper[4760]: E1125 08:12:47.938381 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:47 crc kubenswrapper[4760]: E1125 08:12:47.938503 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:47 crc kubenswrapper[4760]: E1125 08:12:47.938605 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.945674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.945720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.945755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.945772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:47 crc kubenswrapper[4760]: I1125 08:12:47.945785 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:47Z","lastTransitionTime":"2025-11-25T08:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.048110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.048475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.048611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.048733 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.048835 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.151765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.151805 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.151816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.151830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.151838 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.254207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.254278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.254290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.254310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.254328 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.356810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.356847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.356855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.356868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.356878 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.458768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.458794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.458803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.458815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.458824 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.561440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.561720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.561732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.561746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.561756 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.664992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.665027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.665037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.665050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.665059 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.767807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.767851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.767862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.767878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.767887 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.870547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.870574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.870600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.870613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.870622 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.937767 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:48 crc kubenswrapper[4760]: E1125 08:12:48.937950 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.972759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.972790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.972798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.972811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:48 crc kubenswrapper[4760]: I1125 08:12:48.972821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:48Z","lastTransitionTime":"2025-11-25T08:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.074701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.074942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.075138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.075340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.075732 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:49Z","lastTransitionTime":"2025-11-25T08:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.178232 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.178295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.178306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.178322 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.178333 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:49Z","lastTransitionTime":"2025-11-25T08:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.281422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.281481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.281494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.281513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.281526 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:49Z","lastTransitionTime":"2025-11-25T08:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.384189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.384227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.384237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.384288 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.384300 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:49Z","lastTransitionTime":"2025-11-25T08:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.487849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.487897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.487908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.487924 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.487937 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:49Z","lastTransitionTime":"2025-11-25T08:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.506970 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:49 crc kubenswrapper[4760]: E1125 08:12:49.507145 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:12:49 crc kubenswrapper[4760]: E1125 08:12:49.507279 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs podName:deaf3f00-2bbd-4217-9414-5a6759e72b60 nodeName:}" failed. No retries permitted until 2025-11-25 08:13:53.507226431 +0000 UTC m=+167.216257266 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs") pod "network-metrics-daemon-v2qd9" (UID: "deaf3f00-2bbd-4217-9414-5a6759e72b60") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.590931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.591191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.591276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.591349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.591424 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:49Z","lastTransitionTime":"2025-11-25T08:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.693745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.693792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.693803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.693820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.693835 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:49Z","lastTransitionTime":"2025-11-25T08:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.795853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.795900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.795912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.795927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.795937 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:49Z","lastTransitionTime":"2025-11-25T08:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.898058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.898104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.898115 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.898134 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.898147 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:49Z","lastTransitionTime":"2025-11-25T08:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.937990 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:49 crc kubenswrapper[4760]: E1125 08:12:49.938416 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.938022 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:49 crc kubenswrapper[4760]: E1125 08:12:49.938633 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:49 crc kubenswrapper[4760]: I1125 08:12:49.937993 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:49 crc kubenswrapper[4760]: E1125 08:12:49.938810 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.000144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.000737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.000868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.000955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.001038 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.103550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.103592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.103602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.103618 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.103629 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.205493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.205538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.205552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.205567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.205579 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.308376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.308417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.308429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.308444 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.308455 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.410990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.411035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.411049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.411074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.411092 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.513644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.513693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.513702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.513717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.513727 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.616206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.616283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.616303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.616331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.616347 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.718513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.718554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.718564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.718578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.718588 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.821161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.821233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.821273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.821290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.821301 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.923933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.923971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.923980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.923995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.924005 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:50Z","lastTransitionTime":"2025-11-25T08:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:50 crc kubenswrapper[4760]: I1125 08:12:50.938300 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:50 crc kubenswrapper[4760]: E1125 08:12:50.938442 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.026699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.026734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.026761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.026784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.026801 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.128892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.128946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.128964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.128983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.128995 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.231931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.231995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.232006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.232047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.232061 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.335216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.335291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.335329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.335348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.335363 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.438096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.438172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.438190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.438215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.438237 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.540453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.540490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.540500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.540516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.540526 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.642917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.642969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.642982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.643000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.643011 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.744918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.744961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.744974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.744990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.745000 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.848282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.848335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.848343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.848362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.848373 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.938327 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.938416 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.938557 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:51 crc kubenswrapper[4760]: E1125 08:12:51.938643 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:51 crc kubenswrapper[4760]: E1125 08:12:51.938847 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:51 crc kubenswrapper[4760]: E1125 08:12:51.939033 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.951230 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.951303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.951317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.951336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:51 crc kubenswrapper[4760]: I1125 08:12:51.951349 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:51Z","lastTransitionTime":"2025-11-25T08:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.053652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.053696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.053705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.053719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.053728 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.157023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.157072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.157087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.157109 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.157123 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.260854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.260937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.260951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.260969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.260980 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.363977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.364024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.364039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.364058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.364072 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.466676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.466726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.466739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.466755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.466769 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.569102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.569163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.569177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.569196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.569211 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.671340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.671386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.671403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.671425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.671436 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.774604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.774675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.774699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.774728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.774751 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.876732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.876774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.876786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.876802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.876814 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.937644 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:52 crc kubenswrapper[4760]: E1125 08:12:52.937794 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.978988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.979023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.979031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.979045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:52 crc kubenswrapper[4760]: I1125 08:12:52.979054 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:52Z","lastTransitionTime":"2025-11-25T08:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.081362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.081409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.081424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.081445 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.081461 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:53Z","lastTransitionTime":"2025-11-25T08:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.183454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.183490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.183499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.183511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.183519 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:53Z","lastTransitionTime":"2025-11-25T08:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.285466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.285503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.285515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.285529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.285539 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:53Z","lastTransitionTime":"2025-11-25T08:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.388105 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.388141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.388152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.388167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.388177 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:53Z","lastTransitionTime":"2025-11-25T08:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.491018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.491080 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.491098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.491122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.491145 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:53Z","lastTransitionTime":"2025-11-25T08:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.593546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.593585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.593595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.593610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.593620 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:53Z","lastTransitionTime":"2025-11-25T08:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.695352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.695578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.695685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.695765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.695838 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:53Z","lastTransitionTime":"2025-11-25T08:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.798045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.798088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.798103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.798122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.798136 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:53Z","lastTransitionTime":"2025-11-25T08:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.900578 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.900853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.900925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.900987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.901044 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:53Z","lastTransitionTime":"2025-11-25T08:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.937864 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:53 crc kubenswrapper[4760]: E1125 08:12:53.938206 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.938424 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:53 crc kubenswrapper[4760]: I1125 08:12:53.938493 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:53 crc kubenswrapper[4760]: E1125 08:12:53.938576 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:53 crc kubenswrapper[4760]: E1125 08:12:53.938707 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.003873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.003908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.003916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.003930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.003940 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.106569 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.106649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.106677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.106706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.106729 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.136199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.136268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.136283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.136302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.136314 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: E1125 08:12:54.147786 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.152293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.152349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.152370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.152398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.152417 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: E1125 08:12:54.167312 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.171518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.171767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.171930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.172070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.172189 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: E1125 08:12:54.187194 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.191299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.191342 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.191357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.191374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.191385 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: E1125 08:12:54.204325 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.207784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.207837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.207857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.207880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.207895 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: E1125 08:12:54.222922 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T08:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7123858-d8c0-4a9e-a959-9447d279982b\\\",\\\"systemUUID\\\":\\\"6bb7addf-227a-4139-b3ea-9499fe12a177\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T08:12:54Z is after 2025-08-24T17:21:41Z" Nov 25 08:12:54 crc kubenswrapper[4760]: E1125 08:12:54.223182 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.225101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.225262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.225286 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.225306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.225323 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.328048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.328438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.328620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.328774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.328929 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.431469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.431525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.431536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.431552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.431563 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.533811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.533861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.533873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.533890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.533902 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.636612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.636664 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.636672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.636685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.636694 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.738821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.738858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.738865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.738878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.738886 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.841219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.841280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.841292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.841306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.841317 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.937907 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:54 crc kubenswrapper[4760]: E1125 08:12:54.938149 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.942603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.942646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.942656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.942671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:54 crc kubenswrapper[4760]: I1125 08:12:54.942681 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:54Z","lastTransitionTime":"2025-11-25T08:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.044688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.044724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.044735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.044753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.044769 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.148467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.148561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.148577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.148600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.148627 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.251176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.251218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.251229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.251259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.251272 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.354882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.354949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.354972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.355001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.355025 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.458189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.458274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.458287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.458303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.458314 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.560956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.560987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.560996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.561009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.561017 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.663992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.664065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.664077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.664093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.664105 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.767439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.767492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.767502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.767526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.767539 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.870450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.870488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.870498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.870511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.870519 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.938332 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.938332 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.938419 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:55 crc kubenswrapper[4760]: E1125 08:12:55.938501 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:55 crc kubenswrapper[4760]: E1125 08:12:55.938701 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:55 crc kubenswrapper[4760]: E1125 08:12:55.938853 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.972676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.972716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.972727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.972743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:55 crc kubenswrapper[4760]: I1125 08:12:55.972755 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:55Z","lastTransitionTime":"2025-11-25T08:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.075357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.075408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.075427 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.075453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.075474 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:56Z","lastTransitionTime":"2025-11-25T08:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.178569 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.178600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.178610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.178626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.178637 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:56Z","lastTransitionTime":"2025-11-25T08:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.280359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.280662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.280893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.281001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.281103 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:56Z","lastTransitionTime":"2025-11-25T08:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.383343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.383375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.383383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.383398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.383407 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:56Z","lastTransitionTime":"2025-11-25T08:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.485340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.485607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.485692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.485804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.485904 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:56Z","lastTransitionTime":"2025-11-25T08:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.588848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.588886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.588894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.588909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.588919 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:56Z","lastTransitionTime":"2025-11-25T08:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.691700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.691739 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.691748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.691763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.691773 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:56Z","lastTransitionTime":"2025-11-25T08:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.795735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.796057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.796152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.796229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.796314 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:56Z","lastTransitionTime":"2025-11-25T08:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.898010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.898344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.898476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.898583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.898659 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:56Z","lastTransitionTime":"2025-11-25T08:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.937658 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:56 crc kubenswrapper[4760]: E1125 08:12:56.938145 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.970537 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-x6n7t" podStartSLOduration=85.970519114 podStartE2EDuration="1m25.970519114s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:56.959004274 +0000 UTC m=+110.668035089" watchObservedRunningTime="2025-11-25 08:12:56.970519114 +0000 UTC m=+110.679549909" Nov 25 08:12:56 crc kubenswrapper[4760]: I1125 08:12:56.970681 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-nlwcx" podStartSLOduration=85.970677349 podStartE2EDuration="1m25.970677349s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:56.970428611 +0000 UTC m=+110.679459416" watchObservedRunningTime="2025-11-25 08:12:56.970677349 +0000 UTC m=+110.679708144" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.001178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.001476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.001485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.001501 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.001519 4760 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.021104 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.021085146 podStartE2EDuration="1m3.021085146s" podCreationTimestamp="2025-11-25 08:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:57.008446853 +0000 UTC m=+110.717477648" watchObservedRunningTime="2025-11-25 08:12:57.021085146 +0000 UTC m=+110.730115941" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.043870 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podStartSLOduration=86.043852407 podStartE2EDuration="1m26.043852407s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:57.04327733 +0000 UTC m=+110.752308125" watchObservedRunningTime="2025-11-25 08:12:57.043852407 +0000 UTC m=+110.752883202" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.056818 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c8n4r" podStartSLOduration=86.056797659 podStartE2EDuration="1m26.056797659s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:57.056664945 +0000 UTC m=+110.765695760" watchObservedRunningTime="2025-11-25 08:12:57.056797659 +0000 UTC m=+110.765828454" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.090574 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=91.090553985 podStartE2EDuration="1m31.090553985s" podCreationTimestamp="2025-11-25 08:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:57.075239313 +0000 UTC m=+110.784270108" watchObservedRunningTime="2025-11-25 08:12:57.090553985 +0000 UTC m=+110.799584780" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.090677 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tj64g" podStartSLOduration=86.090672888 podStartE2EDuration="1m26.090672888s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:57.089832224 +0000 UTC m=+110.798863029" watchObservedRunningTime="2025-11-25 08:12:57.090672888 +0000 UTC m=+110.799703683" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.104143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.104187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.104198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.104213 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.104225 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.132358 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=87.132340848 podStartE2EDuration="1m27.132340848s" podCreationTimestamp="2025-11-25 08:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:57.13175783 +0000 UTC m=+110.840788645" watchObservedRunningTime="2025-11-25 08:12:57.132340848 +0000 UTC m=+110.841371643" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.171133 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.171110871 podStartE2EDuration="1m31.171110871s" podCreationTimestamp="2025-11-25 08:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:57.150489143 +0000 UTC m=+110.859519938" watchObservedRunningTime="2025-11-25 08:12:57.171110871 +0000 UTC m=+110.880141666" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.206069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.206110 4760 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.206119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.206135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.206144 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.215598 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-r4rlz" podStartSLOduration=86.215580243 podStartE2EDuration="1m26.215580243s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:57.213992306 +0000 UTC m=+110.923023131" watchObservedRunningTime="2025-11-25 08:12:57.215580243 +0000 UTC m=+110.924611038" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.246630 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.246611759 podStartE2EDuration="20.246611759s" podCreationTimestamp="2025-11-25 08:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:12:57.2449829 +0000 UTC m=+110.954013695" 
watchObservedRunningTime="2025-11-25 08:12:57.246611759 +0000 UTC m=+110.955642554" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.307618 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.307652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.307661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.307676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.307685 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.409794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.409830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.409839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.409853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.409863 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.512439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.512490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.512502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.512521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.512535 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.614635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.614669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.614680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.614694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.614704 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.717379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.717444 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.717457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.717488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.717503 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.819946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.819996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.820011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.820029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.820043 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.922236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.922304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.922314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.922331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.922342 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:57Z","lastTransitionTime":"2025-11-25T08:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.937492 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.937519 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:57 crc kubenswrapper[4760]: I1125 08:12:57.937525 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:57 crc kubenswrapper[4760]: E1125 08:12:57.937628 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:57 crc kubenswrapper[4760]: E1125 08:12:57.937747 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:57 crc kubenswrapper[4760]: E1125 08:12:57.937822 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.024683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.024737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.024768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.024794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.024814 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.127802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.127845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.127853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.127870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.127879 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.230594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.230674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.230694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.230720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.230738 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.332901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.332952 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.332964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.332980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.332990 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.435507 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.435544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.435557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.435573 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.435587 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.537679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.537711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.537718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.537731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.537740 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.639967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.640001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.640012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.640025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.640035 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.742642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.742729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.742744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.742767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.742784 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.846875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.846913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.846923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.846938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.846950 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.937937 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:12:58 crc kubenswrapper[4760]: E1125 08:12:58.938709 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.938975 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:12:58 crc kubenswrapper[4760]: E1125 08:12:58.939234 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c2bhp_openshift-ovn-kubernetes(244c5c71-3110-4dcd-89f3-4dadfc405131)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.949453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.949551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.949567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.949582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:58 crc kubenswrapper[4760]: I1125 08:12:58.949596 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:58Z","lastTransitionTime":"2025-11-25T08:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.052430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.052461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.052468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.052481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.052491 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.154086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.154159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.154182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.154210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.154230 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.257624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.257673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.257684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.257702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.257715 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.360209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.360260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.360268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.360280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.360289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.462282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.462317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.462326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.462338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.462347 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.564966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.565049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.565075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.565104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.565124 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.667067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.667281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.667310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.667332 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.667347 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.769808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.770063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.770082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.770101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.770112 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.872293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.872330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.872342 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.872359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.872371 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.938213 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.938272 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.938272 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:12:59 crc kubenswrapper[4760]: E1125 08:12:59.938337 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:12:59 crc kubenswrapper[4760]: E1125 08:12:59.938492 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:12:59 crc kubenswrapper[4760]: E1125 08:12:59.938599 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.976839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.976882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.976893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.976909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:12:59 crc kubenswrapper[4760]: I1125 08:12:59.976920 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:12:59Z","lastTransitionTime":"2025-11-25T08:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.079384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.079417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.079425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.079438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.079447 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:00Z","lastTransitionTime":"2025-11-25T08:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.184359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.184400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.184410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.184422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.184431 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:00Z","lastTransitionTime":"2025-11-25T08:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.287080 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.287130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.287145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.287164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.287178 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:00Z","lastTransitionTime":"2025-11-25T08:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.390176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.390223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.390237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.390274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.390286 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:00Z","lastTransitionTime":"2025-11-25T08:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.492372 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.492625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.492686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.492750 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.492815 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:00Z","lastTransitionTime":"2025-11-25T08:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.596300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.596354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.596371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.596396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.596412 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:00Z","lastTransitionTime":"2025-11-25T08:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.699601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.700192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.700430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.700648 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.700787 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:00Z","lastTransitionTime":"2025-11-25T08:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.803093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.803129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.803142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.803155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.803163 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:00Z","lastTransitionTime":"2025-11-25T08:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.906373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.907052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.907327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.907580 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.907807 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:00Z","lastTransitionTime":"2025-11-25T08:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:13:00 crc kubenswrapper[4760]: I1125 08:13:00.938597 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:00 crc kubenswrapper[4760]: E1125 08:13:00.938779 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.010566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.010611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.010630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.010646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.010657 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.113666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.113705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.113717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.113733 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.113747 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.215358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.215390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.215398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.215410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.215421 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.318583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.318650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.318670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.318699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.318718 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.421540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.421593 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.421607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.421624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.421639 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.527096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.527355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.527458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.527535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.527613 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.634880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.635195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.635489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.635740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.635987 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.738460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.738493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.738502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.738518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.738529 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.841019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.841422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.841533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.841642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.841756 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.938263 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:01 crc kubenswrapper[4760]: E1125 08:13:01.938384 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.938414 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.938511 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:01 crc kubenswrapper[4760]: E1125 08:13:01.938571 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:01 crc kubenswrapper[4760]: E1125 08:13:01.938657 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.944149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.944191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.944203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.944217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:01 crc kubenswrapper[4760]: I1125 08:13:01.944230 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:01Z","lastTransitionTime":"2025-11-25T08:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.046176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.046223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.046235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.046284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.046298 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.148903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.148944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.148971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.148986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.148995 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.251454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.251525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.251550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.251574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.251590 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.353472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.353511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.353525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.353548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.353563 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.456409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.456452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.456462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.456481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.456492 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.558917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.558985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.559028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.559065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.559089 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.662220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.662283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.662296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.662312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.662325 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.764298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.764605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.764619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.764634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.764645 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.866954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.867007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.867020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.867034 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.867043 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.938426 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:02 crc kubenswrapper[4760]: E1125 08:13:02.938591 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.969584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.969651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.969665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.969686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:02 crc kubenswrapper[4760]: I1125 08:13:02.969700 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:02Z","lastTransitionTime":"2025-11-25T08:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.071990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.072534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.072629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.072721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.072798 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.176436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.176490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.176500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.176518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.176531 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.278369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.278408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.278418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.278435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.278446 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.380613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.381217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.381330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.381408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.381485 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.483418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.483463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.483472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.483488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.483498 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.586375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.586454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.586484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.586516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.586540 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.689152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.689193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.689204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.689219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.689229 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.792598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.792669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.792694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.792724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.792746 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.895387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.895654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.895736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.895834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.895921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.937894 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.937913 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:03 crc kubenswrapper[4760]: E1125 08:13:03.938365 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.937963 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:03 crc kubenswrapper[4760]: E1125 08:13:03.938383 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:03 crc kubenswrapper[4760]: E1125 08:13:03.938626 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.998865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.998926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.998960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.999001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:03 crc kubenswrapper[4760]: I1125 08:13:03.999025 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:03Z","lastTransitionTime":"2025-11-25T08:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.101797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.101864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.101882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.101905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.101928 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:04Z","lastTransitionTime":"2025-11-25T08:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.205154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.205198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.205207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.205227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.205240 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:04Z","lastTransitionTime":"2025-11-25T08:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.308529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.308610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.308634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.308674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.308696 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:04Z","lastTransitionTime":"2025-11-25T08:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.411622 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.411686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.411707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.411751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.411787 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:04Z","lastTransitionTime":"2025-11-25T08:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.430493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.430816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.430971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.431150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.431322 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T08:13:04Z","lastTransitionTime":"2025-11-25T08:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.481945 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr"] Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.482804 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.485572 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.485901 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.486017 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.486100 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.581242 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/213eb04c-19a8-4ea5-91d0-0720ebba019c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.581380 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/213eb04c-19a8-4ea5-91d0-0720ebba019c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.581427 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/213eb04c-19a8-4ea5-91d0-0720ebba019c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.581447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/213eb04c-19a8-4ea5-91d0-0720ebba019c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.581467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/213eb04c-19a8-4ea5-91d0-0720ebba019c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.682380 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/213eb04c-19a8-4ea5-91d0-0720ebba019c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.682426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/213eb04c-19a8-4ea5-91d0-0720ebba019c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.682444 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/213eb04c-19a8-4ea5-91d0-0720ebba019c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.682459 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/213eb04c-19a8-4ea5-91d0-0720ebba019c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.682482 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/213eb04c-19a8-4ea5-91d0-0720ebba019c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.682828 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/213eb04c-19a8-4ea5-91d0-0720ebba019c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.683267 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/213eb04c-19a8-4ea5-91d0-0720ebba019c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.683702 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/213eb04c-19a8-4ea5-91d0-0720ebba019c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.688740 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/213eb04c-19a8-4ea5-91d0-0720ebba019c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.699267 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/213eb04c-19a8-4ea5-91d0-0720ebba019c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fwpkr\" (UID: \"213eb04c-19a8-4ea5-91d0-0720ebba019c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.800223 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" Nov 25 08:13:04 crc kubenswrapper[4760]: I1125 08:13:04.937571 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:04 crc kubenswrapper[4760]: E1125 08:13:04.937730 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:05 crc kubenswrapper[4760]: I1125 08:13:05.512160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" event={"ID":"213eb04c-19a8-4ea5-91d0-0720ebba019c","Type":"ContainerStarted","Data":"ce4edd18913ed4608ad605d7661753e5718d9d300cd471bed5ced61bd897fbfc"} Nov 25 08:13:05 crc kubenswrapper[4760]: I1125 08:13:05.512217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" event={"ID":"213eb04c-19a8-4ea5-91d0-0720ebba019c","Type":"ContainerStarted","Data":"2e9bc855f65882227f71cc3b87f309b2241f71f6a7839976d0863a36056d4baf"} Nov 25 08:13:05 crc kubenswrapper[4760]: I1125 08:13:05.938195 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:05 crc kubenswrapper[4760]: I1125 08:13:05.938209 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:05 crc kubenswrapper[4760]: E1125 08:13:05.938739 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:05 crc kubenswrapper[4760]: E1125 08:13:05.938827 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:05 crc kubenswrapper[4760]: I1125 08:13:05.938226 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:05 crc kubenswrapper[4760]: E1125 08:13:05.938963 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:06 crc kubenswrapper[4760]: I1125 08:13:06.938717 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:06 crc kubenswrapper[4760]: E1125 08:13:06.940031 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:06 crc kubenswrapper[4760]: E1125 08:13:06.977736 4760 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 25 08:13:07 crc kubenswrapper[4760]: E1125 08:13:07.025961 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 08:13:07 crc kubenswrapper[4760]: I1125 08:13:07.938082 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:07 crc kubenswrapper[4760]: I1125 08:13:07.938136 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:07 crc kubenswrapper[4760]: I1125 08:13:07.938218 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:07 crc kubenswrapper[4760]: E1125 08:13:07.938216 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:07 crc kubenswrapper[4760]: E1125 08:13:07.938339 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:07 crc kubenswrapper[4760]: E1125 08:13:07.938532 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:08 crc kubenswrapper[4760]: I1125 08:13:08.938056 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:08 crc kubenswrapper[4760]: E1125 08:13:08.938320 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.523674 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/1.log" Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.524118 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/0.log" Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.524161 4760 generic.go:334] "Generic (PLEG): container finished" podID="29261de0-ae0c-4828-afed-e6036aa367cf" containerID="ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5" exitCode=1 Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.524192 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x6n7t" event={"ID":"29261de0-ae0c-4828-afed-e6036aa367cf","Type":"ContainerDied","Data":"ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5"} Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.524228 4760 scope.go:117] "RemoveContainer" containerID="c41cccbfa5de5abc76c7d4d26ad7fbc276ea2ea0cdfd109a898f1b399365e7ff" Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.524960 4760 scope.go:117] "RemoveContainer" containerID="ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5" Nov 25 08:13:09 crc kubenswrapper[4760]: E1125 08:13:09.525196 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-x6n7t_openshift-multus(29261de0-ae0c-4828-afed-e6036aa367cf)\"" pod="openshift-multus/multus-x6n7t" podUID="29261de0-ae0c-4828-afed-e6036aa367cf" Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.541591 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fwpkr" podStartSLOduration=98.541576341 podStartE2EDuration="1m38.541576341s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:05.527285571 +0000 UTC m=+119.236316356" watchObservedRunningTime="2025-11-25 08:13:09.541576341 +0000 UTC m=+123.250607136" Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.938092 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.938149 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:09 crc kubenswrapper[4760]: I1125 08:13:09.938164 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:09 crc kubenswrapper[4760]: E1125 08:13:09.938289 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:09 crc kubenswrapper[4760]: E1125 08:13:09.938434 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:09 crc kubenswrapper[4760]: E1125 08:13:09.938628 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:10 crc kubenswrapper[4760]: I1125 08:13:10.528509 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/1.log" Nov 25 08:13:10 crc kubenswrapper[4760]: I1125 08:13:10.937836 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:10 crc kubenswrapper[4760]: E1125 08:13:10.937969 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:11 crc kubenswrapper[4760]: I1125 08:13:11.937783 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:11 crc kubenswrapper[4760]: I1125 08:13:11.937811 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:11 crc kubenswrapper[4760]: E1125 08:13:11.937919 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:11 crc kubenswrapper[4760]: I1125 08:13:11.937783 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:11 crc kubenswrapper[4760]: E1125 08:13:11.938043 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:11 crc kubenswrapper[4760]: E1125 08:13:11.938210 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:12 crc kubenswrapper[4760]: E1125 08:13:12.028271 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 08:13:12 crc kubenswrapper[4760]: I1125 08:13:12.937986 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:12 crc kubenswrapper[4760]: E1125 08:13:12.938118 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:13 crc kubenswrapper[4760]: I1125 08:13:13.937472 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:13 crc kubenswrapper[4760]: I1125 08:13:13.937494 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:13 crc kubenswrapper[4760]: E1125 08:13:13.937595 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:13 crc kubenswrapper[4760]: I1125 08:13:13.937680 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:13 crc kubenswrapper[4760]: E1125 08:13:13.938061 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:13 crc kubenswrapper[4760]: E1125 08:13:13.938166 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:13 crc kubenswrapper[4760]: I1125 08:13:13.938698 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:13:14 crc kubenswrapper[4760]: I1125 08:13:14.543448 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/3.log" Nov 25 08:13:14 crc kubenswrapper[4760]: I1125 08:13:14.546489 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerStarted","Data":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} Nov 25 08:13:14 crc kubenswrapper[4760]: I1125 08:13:14.547056 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:13:14 crc kubenswrapper[4760]: I1125 08:13:14.577038 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podStartSLOduration=103.577011804 podStartE2EDuration="1m43.577011804s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:14.576180699 +0000 UTC m=+128.285211494" watchObservedRunningTime="2025-11-25 08:13:14.577011804 +0000 UTC m=+128.286042589" Nov 25 08:13:14 crc kubenswrapper[4760]: I1125 08:13:14.690057 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v2qd9"] Nov 25 08:13:14 crc kubenswrapper[4760]: I1125 08:13:14.690275 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:14 crc kubenswrapper[4760]: E1125 08:13:14.690421 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:15 crc kubenswrapper[4760]: I1125 08:13:15.937771 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:15 crc kubenswrapper[4760]: I1125 08:13:15.937852 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:15 crc kubenswrapper[4760]: I1125 08:13:15.937789 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:15 crc kubenswrapper[4760]: E1125 08:13:15.937925 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:15 crc kubenswrapper[4760]: E1125 08:13:15.938060 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:15 crc kubenswrapper[4760]: E1125 08:13:15.938109 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:15 crc kubenswrapper[4760]: I1125 08:13:15.938159 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:15 crc kubenswrapper[4760]: E1125 08:13:15.938215 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:17 crc kubenswrapper[4760]: E1125 08:13:17.028949 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 08:13:17 crc kubenswrapper[4760]: I1125 08:13:17.937922 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:17 crc kubenswrapper[4760]: I1125 08:13:17.937957 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:17 crc kubenswrapper[4760]: I1125 08:13:17.938003 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:17 crc kubenswrapper[4760]: E1125 08:13:17.938113 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:17 crc kubenswrapper[4760]: I1125 08:13:17.938189 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:17 crc kubenswrapper[4760]: E1125 08:13:17.938189 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:17 crc kubenswrapper[4760]: E1125 08:13:17.938379 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:17 crc kubenswrapper[4760]: E1125 08:13:17.938441 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:19 crc kubenswrapper[4760]: I1125 08:13:19.937438 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:19 crc kubenswrapper[4760]: I1125 08:13:19.937462 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:19 crc kubenswrapper[4760]: I1125 08:13:19.937562 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:19 crc kubenswrapper[4760]: I1125 08:13:19.937569 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:19 crc kubenswrapper[4760]: E1125 08:13:19.938405 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:19 crc kubenswrapper[4760]: E1125 08:13:19.938635 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:19 crc kubenswrapper[4760]: E1125 08:13:19.938735 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:19 crc kubenswrapper[4760]: E1125 08:13:19.938868 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:21 crc kubenswrapper[4760]: I1125 08:13:21.938085 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:21 crc kubenswrapper[4760]: E1125 08:13:21.938215 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:21 crc kubenswrapper[4760]: I1125 08:13:21.938082 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:21 crc kubenswrapper[4760]: I1125 08:13:21.938087 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:21 crc kubenswrapper[4760]: E1125 08:13:21.938312 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:21 crc kubenswrapper[4760]: I1125 08:13:21.938106 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:21 crc kubenswrapper[4760]: E1125 08:13:21.938392 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:21 crc kubenswrapper[4760]: E1125 08:13:21.938466 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:22 crc kubenswrapper[4760]: E1125 08:13:22.030122 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 08:13:23 crc kubenswrapper[4760]: I1125 08:13:23.938187 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:23 crc kubenswrapper[4760]: I1125 08:13:23.938238 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:23 crc kubenswrapper[4760]: I1125 08:13:23.938370 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:23 crc kubenswrapper[4760]: E1125 08:13:23.938613 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:23 crc kubenswrapper[4760]: I1125 08:13:23.938648 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:23 crc kubenswrapper[4760]: E1125 08:13:23.938749 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:23 crc kubenswrapper[4760]: E1125 08:13:23.938831 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:23 crc kubenswrapper[4760]: E1125 08:13:23.938890 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:24 crc kubenswrapper[4760]: I1125 08:13:24.938207 4760 scope.go:117] "RemoveContainer" containerID="ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5" Nov 25 08:13:25 crc kubenswrapper[4760]: I1125 08:13:25.583459 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/1.log" Nov 25 08:13:25 crc kubenswrapper[4760]: I1125 08:13:25.583766 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x6n7t" event={"ID":"29261de0-ae0c-4828-afed-e6036aa367cf","Type":"ContainerStarted","Data":"3e9a8382e6791cdaff72ff69f8e4d9f8d43d278f8f44f38094ed07a4d9a31cfd"} Nov 25 08:13:25 crc kubenswrapper[4760]: I1125 08:13:25.938146 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:25 crc kubenswrapper[4760]: I1125 08:13:25.938191 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:25 crc kubenswrapper[4760]: I1125 08:13:25.938152 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:25 crc kubenswrapper[4760]: E1125 08:13:25.938330 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 08:13:25 crc kubenswrapper[4760]: I1125 08:13:25.938402 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:25 crc kubenswrapper[4760]: E1125 08:13:25.938511 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 08:13:25 crc kubenswrapper[4760]: E1125 08:13:25.938676 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 08:13:25 crc kubenswrapper[4760]: E1125 08:13:25.938817 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v2qd9" podUID="deaf3f00-2bbd-4217-9414-5a6759e72b60" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.937418 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.937431 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.937527 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.937595 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.939386 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.939766 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.939925 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.940226 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.940783 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 08:13:27 crc kubenswrapper[4760]: I1125 08:13:27.940966 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.893829 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.894024 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:33 crc kubenswrapper[4760]: E1125 08:13:33.894052 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:15:35.894020989 +0000 UTC m=+269.603051784 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.894147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.895281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.900021 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.974097 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.995179 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.995279 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.999195 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:33 crc kubenswrapper[4760]: I1125 08:13:33.999333 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:34 crc kubenswrapper[4760]: W1125 08:13:34.151595 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-35f0e90a6f8660a0873bbfc71d4b03b3ac4f47242dbe24ff74c186ceefec4fdf WatchSource:0}: Error finding container 35f0e90a6f8660a0873bbfc71d4b03b3ac4f47242dbe24ff74c186ceefec4fdf: Status 404 returned error can't find the container with id 35f0e90a6f8660a0873bbfc71d4b03b3ac4f47242dbe24ff74c186ceefec4fdf Nov 25 08:13:34 crc kubenswrapper[4760]: I1125 08:13:34.254483 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 08:13:34 crc kubenswrapper[4760]: I1125 08:13:34.264199 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:34 crc kubenswrapper[4760]: W1125 08:13:34.401930 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3e7b11d926e027f005f6c663259420bf6024d72b48cd65272ad8af19a185aa48 WatchSource:0}: Error finding container 3e7b11d926e027f005f6c663259420bf6024d72b48cd65272ad8af19a185aa48: Status 404 returned error can't find the container with id 3e7b11d926e027f005f6c663259420bf6024d72b48cd65272ad8af19a185aa48 Nov 25 08:13:34 crc kubenswrapper[4760]: W1125 08:13:34.444907 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-2642953ab56412a21db0a90151de432c5aa7c181c9207a8c55c00450b3311270 WatchSource:0}: Error finding container 2642953ab56412a21db0a90151de432c5aa7c181c9207a8c55c00450b3311270: Status 404 returned error can't find the container with id 2642953ab56412a21db0a90151de432c5aa7c181c9207a8c55c00450b3311270 Nov 25 08:13:34 crc kubenswrapper[4760]: I1125 08:13:34.615672 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"110900a1550b3d7a45f5943915741f7973544228328e0737f22344d4dbd505ff"} Nov 25 08:13:34 crc kubenswrapper[4760]: I1125 08:13:34.615716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"35f0e90a6f8660a0873bbfc71d4b03b3ac4f47242dbe24ff74c186ceefec4fdf"} Nov 25 08:13:34 crc kubenswrapper[4760]: I1125 08:13:34.618758 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b02389a757a6d18007585de820efe92c73319903220a4ffba0996291acd50c12"} Nov 25 08:13:34 crc kubenswrapper[4760]: I1125 08:13:34.618810 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2642953ab56412a21db0a90151de432c5aa7c181c9207a8c55c00450b3311270"} Nov 25 08:13:34 crc kubenswrapper[4760]: I1125 08:13:34.618939 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:13:34 crc kubenswrapper[4760]: I1125 08:13:34.620915 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d6fffbb0adc34e985e7bec9b97f5d831c5246019671fac250836690fedcfd359"} Nov 25 08:13:34 crc kubenswrapper[4760]: I1125 08:13:34.620955 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3e7b11d926e027f005f6c663259420bf6024d72b48cd65272ad8af19a185aa48"} Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.301082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.340763 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9dz6w"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.341553 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.348813 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.349235 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.349375 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.349679 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.350125 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.350562 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.350798 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.352460 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.353380 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.361362 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.361480 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.362080 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.363317 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-trtpm"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.363785 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6w6bs"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.364201 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.364375 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.364597 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.365076 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.365156 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.365302 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-46x6w"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.365534 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.365709 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.368358 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.368531 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.368540 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-trtj2"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.368964 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bsp8l"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.368982 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.369367 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.369662 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.371295 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-s4qrl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.371678 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.371989 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.372406 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.379985 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.384984 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.385028 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.385324 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.385361 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.385498 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.385570 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.385621 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.385865 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.386710 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp"] Nov 25 08:13:35 crc 
kubenswrapper[4760]: I1125 08:13:35.387273 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.387878 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fcw7b"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.388380 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.390671 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-pvjn5"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.391112 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.391434 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.391902 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.392292 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-pvjn5" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.392545 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.392880 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.393437 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.393805 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.393982 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.406291 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.406398 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.406416 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.406461 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.407768 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.416117 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.416586 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.420208 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.420439 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.420225 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.420885 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.420895 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.420963 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.421015 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.421038 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.421229 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 
08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.421311 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.421403 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.421421 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.421515 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.421516 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.421747 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.422213 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.422342 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.459290 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.459519 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.459732 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.461447 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.462239 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.463313 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.463536 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.463648 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.463793 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.463920 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.464089 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.464227 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.464358 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 
08:13:35.464519 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.464746 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.464871 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.465576 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.465762 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.465870 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.466007 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.466285 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.466442 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.466546 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.468872 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 
08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.469181 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.469416 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.469675 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.469830 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.469937 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.470094 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.470316 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.470325 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.470491 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.470545 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.472666 4760 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.475753 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.475905 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.476275 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.476391 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.477349 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.478478 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.478717 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.478882 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.479036 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.479221 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.479400 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.479702 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.481506 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jw9hf"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.482175 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.493633 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.493801 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.493885 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.493973 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.494389 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.496081 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 
08:13:35.498977 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.500395 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.505431 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jsphj"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.505931 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.506396 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.506843 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.507489 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.507814 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.508004 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.508006 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.508758 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.509214 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vttfl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.509635 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.510199 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.510324 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.510493 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.511387 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.512037 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.512441 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.512563 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.514064 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.516883 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xbx5c"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.520235 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.520676 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-image-import-ca\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.520850 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1556c569-2bfa-4b43-ac95-468f72dbcb94-serving-cert\") pod \"openshift-config-operator-7777fb866f-gl5fp\" (UID: \"1556c569-2bfa-4b43-ac95-468f72dbcb94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.520899 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8ee6c92-f652-4da5-8291-f3fedd05be84-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.520958 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04581d94-b273-433b-a481-aa41acb8dbd4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66bd7\" (UID: \"04581d94-b273-433b-a481-aa41acb8dbd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.520999 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521031 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0bfdc5b-7be8-4072-a9fc-342231fefc83-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-g8jn9\" (UID: \"c0bfdc5b-7be8-4072-a9fc-342231fefc83\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521076 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/930152c4-9e5c-47e6-8c3b-46678c063e8f-node-pullsecrets\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521464 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbc0deb-c42c-40bb-b313-44957ed5b688-auth-proxy-config\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521510 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73ee8ff0-97e5-4ce1-aba3-110933546bab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: 
\"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521541 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxstb\" (UniqueName: \"kubernetes.io/projected/8cd6819e-5f95-4734-90ef-484b3362a7c9-kube-api-access-wxstb\") pod \"cluster-samples-operator-665b6dd947-k6cm2\" (UID: \"8cd6819e-5f95-4734-90ef-484b3362a7c9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521629 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521660 
4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffafdad-e326-4d95-8733-e5b5b2197ad9-config\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-client-ca\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521717 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73ee8ff0-97e5-4ce1-aba3-110933546bab-metrics-tls\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521751 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8ee6c92-f652-4da5-8291-f3fedd05be84-audit-policies\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: 
\"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521802 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbc0deb-c42c-40bb-b313-44957ed5b688-config\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521833 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rdxm\" (UniqueName: \"kubernetes.io/projected/930152c4-9e5c-47e6-8c3b-46678c063e8f-kube-api-access-4rdxm\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521896 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7cwk\" (UniqueName: \"kubernetes.io/projected/1ffafdad-e326-4d95-8733-e5b5b2197ad9-kube-api-access-r7cwk\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4cbc0deb-c42c-40bb-b313-44957ed5b688-machine-approver-tls\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521961 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-config\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.521988 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ee6c92-f652-4da5-8291-f3fedd05be84-serving-cert\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.524650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8ee6c92-f652-4da5-8291-f3fedd05be84-audit-dir\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.524864 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6081bf3c-671c-46d5-8fbf-df633064cbe7-serving-cert\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.524881 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.524890 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1ffafdad-e326-4d95-8733-e5b5b2197ad9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525106 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-dir\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-etcd-serving-ca\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8ee6c92-f652-4da5-8291-f3fedd05be84-encryption-config\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc 
kubenswrapper[4760]: I1125 08:13:35.525517 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mfsm\" (UniqueName: \"kubernetes.io/projected/6081bf3c-671c-46d5-8fbf-df633064cbe7-kube-api-access-5mfsm\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-config\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525586 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8ee6c92-f652-4da5-8291-f3fedd05be84-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525603 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525622 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525646 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ee8ff0-97e5-4ce1-aba3-110933546bab-trusted-ca\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bfdc5b-7be8-4072-a9fc-342231fefc83-config\") pod \"kube-apiserver-operator-766d6c64bb-g8jn9\" (UID: \"c0bfdc5b-7be8-4072-a9fc-342231fefc83\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525685 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-config\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525708 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/930152c4-9e5c-47e6-8c3b-46678c063e8f-encryption-config\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: 
I1125 08:13:35.525727 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-audit\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525745 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/930152c4-9e5c-47e6-8c3b-46678c063e8f-etcd-client\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525812 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbg4w\" (UniqueName: \"kubernetes.io/projected/73ee8ff0-97e5-4ce1-aba3-110933546bab-kube-api-access-dbg4w\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525831 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp6vm\" (UniqueName: \"kubernetes.io/projected/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-kube-api-access-wp6vm\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525859 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bfdc5b-7be8-4072-a9fc-342231fefc83-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-g8jn9\" (UID: 
\"c0bfdc5b-7be8-4072-a9fc-342231fefc83\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525881 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8ee6c92-f652-4da5-8291-f3fedd05be84-etcd-client\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525924 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525951 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.525972 
4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/930152c4-9e5c-47e6-8c3b-46678c063e8f-audit-dir\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526034 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2jr\" (UniqueName: \"kubernetes.io/projected/1556c569-2bfa-4b43-ac95-468f72dbcb94-kube-api-access-rd2jr\") pod \"openshift-config-operator-7777fb866f-gl5fp\" (UID: \"1556c569-2bfa-4b43-ac95-468f72dbcb94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526062 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526082 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1556c569-2bfa-4b43-ac95-468f72dbcb94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gl5fp\" (UID: \"1556c569-2bfa-4b43-ac95-468f72dbcb94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526108 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgdz6\" (UniqueName: 
\"kubernetes.io/projected/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-kube-api-access-qgdz6\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526136 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjdfn\" (UniqueName: \"kubernetes.io/projected/4cbc0deb-c42c-40bb-b313-44957ed5b688-kube-api-access-wjdfn\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526158 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/773a65eb-f881-42b1-a499-9dd15265f638-serving-cert\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526176 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526216 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ffafdad-e326-4d95-8733-e5b5b2197ad9-images\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526235 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526265 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526458 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526626 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-policies\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04581d94-b273-433b-a481-aa41acb8dbd4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66bd7\" (UID: \"04581d94-b273-433b-a481-aa41acb8dbd4\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.526806 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.527515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndw88\" (UniqueName: \"kubernetes.io/projected/773a65eb-f881-42b1-a499-9dd15265f638-kube-api-access-ndw88\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.527545 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04581d94-b273-433b-a481-aa41acb8dbd4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66bd7\" (UID: \"04581d94-b273-433b-a481-aa41acb8dbd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.527638 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x959k\" (UniqueName: \"kubernetes.io/projected/a8ee6c92-f652-4da5-8291-f3fedd05be84-kube-api-access-x959k\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.527672 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/930152c4-9e5c-47e6-8c3b-46678c063e8f-serving-cert\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.527694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cd6819e-5f95-4734-90ef-484b3362a7c9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-k6cm2\" (UID: \"8cd6819e-5f95-4734-90ef-484b3362a7c9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.527740 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-client-ca\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.530076 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.530358 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.530388 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.530693 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.531009 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.531243 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.533030 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hrhbl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.534085 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.534534 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.534625 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.534905 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.535650 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.536214 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-km6r5"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.536697 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.538298 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.538684 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.540291 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.540787 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2m676"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.541766 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.543006 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.543784 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9dz6w"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.543812 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6w6bs"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.543872 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.545180 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-trtpm"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.545939 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.546092 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.547318 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-46x6w"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.548336 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.550474 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.550704 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-bsp8l"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.552607 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.552815 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.553892 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.554945 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9g4f5"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.555665 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.555970 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.557979 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.559724 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.563332 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.564911 4760 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-trtj2"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.567437 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.569807 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pvjn5"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.577481 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vttfl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.581614 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jsphj"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.583489 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.584573 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-s4qrl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.586044 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.587485 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fcw7b"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.588953 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.590623 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.594208 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.597506 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.603287 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xbx5c"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.603773 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.608219 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.609374 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.610685 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.611969 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.613420 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2m676"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.614484 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.615635 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h8svr"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.618096 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gpwlx"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.618334 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.618579 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-km6r5"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.618660 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gpwlx" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.619648 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hrhbl"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.623239 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.623445 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.625469 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gpwlx"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.627020 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h8svr"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.628503 4760 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rml2b"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.629664 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.629911 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rml2b"] Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04581d94-b273-433b-a481-aa41acb8dbd4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66bd7\" (UID: \"04581d94-b273-433b-a481-aa41acb8dbd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631526 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-image-import-ca\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1556c569-2bfa-4b43-ac95-468f72dbcb94-serving-cert\") pod \"openshift-config-operator-7777fb866f-gl5fp\" (UID: \"1556c569-2bfa-4b43-ac95-468f72dbcb94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631577 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8ee6c92-f652-4da5-8291-f3fedd05be84-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: 
\"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631607 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631634 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0bfdc5b-7be8-4072-a9fc-342231fefc83-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-g8jn9\" (UID: \"c0bfdc5b-7be8-4072-a9fc-342231fefc83\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631659 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbc0deb-c42c-40bb-b313-44957ed5b688-auth-proxy-config\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631681 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/930152c4-9e5c-47e6-8c3b-46678c063e8f-node-pullsecrets\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73ee8ff0-97e5-4ce1-aba3-110933546bab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631757 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxstb\" (UniqueName: \"kubernetes.io/projected/8cd6819e-5f95-4734-90ef-484b3362a7c9-kube-api-access-wxstb\") pod \"cluster-samples-operator-665b6dd947-k6cm2\" (UID: \"8cd6819e-5f95-4734-90ef-484b3362a7c9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631804 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffafdad-e326-4d95-8733-e5b5b2197ad9-config\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631824 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631848 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-client-ca\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631870 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73ee8ff0-97e5-4ce1-aba3-110933546bab-metrics-tls\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631892 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8ee6c92-f652-4da5-8291-f3fedd05be84-audit-policies\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631916 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631938 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7cwk\" (UniqueName: \"kubernetes.io/projected/1ffafdad-e326-4d95-8733-e5b5b2197ad9-kube-api-access-r7cwk\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631961 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbc0deb-c42c-40bb-b313-44957ed5b688-config\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.631984 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rdxm\" (UniqueName: \"kubernetes.io/projected/930152c4-9e5c-47e6-8c3b-46678c063e8f-kube-api-access-4rdxm\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632009 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4cbc0deb-c42c-40bb-b313-44957ed5b688-machine-approver-tls\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/a8ee6c92-f652-4da5-8291-f3fedd05be84-audit-dir\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632056 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-config\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ee6c92-f652-4da5-8291-f3fedd05be84-serving-cert\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632101 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ffafdad-e326-4d95-8733-e5b5b2197ad9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6081bf3c-671c-46d5-8fbf-df633064cbe7-serving-cert\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632149 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-dir\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632196 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-etcd-serving-ca\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8ee6c92-f652-4da5-8291-f3fedd05be84-encryption-config\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632508 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8ee6c92-f652-4da5-8291-f3fedd05be84-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 
08:13:35.632554 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mfsm\" (UniqueName: \"kubernetes.io/projected/6081bf3c-671c-46d5-8fbf-df633064cbe7-kube-api-access-5mfsm\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-config\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ee8ff0-97e5-4ce1-aba3-110933546bab-trusted-ca\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632629 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632652 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/930152c4-9e5c-47e6-8c3b-46678c063e8f-encryption-config\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bfdc5b-7be8-4072-a9fc-342231fefc83-config\") pod \"kube-apiserver-operator-766d6c64bb-g8jn9\" (UID: \"c0bfdc5b-7be8-4072-a9fc-342231fefc83\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632713 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-config\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp6vm\" (UniqueName: \"kubernetes.io/projected/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-kube-api-access-wp6vm\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632756 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-audit\") pod \"apiserver-76f77b778f-9dz6w\" (UID: 
\"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-image-import-ca\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632775 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/930152c4-9e5c-47e6-8c3b-46678c063e8f-etcd-client\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632796 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbg4w\" (UniqueName: \"kubernetes.io/projected/73ee8ff0-97e5-4ce1-aba3-110933546bab-kube-api-access-dbg4w\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632821 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c0bfdc5b-7be8-4072-a9fc-342231fefc83-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-g8jn9\" (UID: \"c0bfdc5b-7be8-4072-a9fc-342231fefc83\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632916 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8ee6c92-f652-4da5-8291-f3fedd05be84-etcd-client\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632968 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/930152c4-9e5c-47e6-8c3b-46678c063e8f-audit-dir\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.632990 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2jr\" (UniqueName: \"kubernetes.io/projected/1556c569-2bfa-4b43-ac95-468f72dbcb94-kube-api-access-rd2jr\") pod \"openshift-config-operator-7777fb866f-gl5fp\" (UID: \"1556c569-2bfa-4b43-ac95-468f72dbcb94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633012 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1556c569-2bfa-4b43-ac95-468f72dbcb94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gl5fp\" (UID: \"1556c569-2bfa-4b43-ac95-468f72dbcb94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633055 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgdz6\" (UniqueName: \"kubernetes.io/projected/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-kube-api-access-qgdz6\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ffafdad-e326-4d95-8733-e5b5b2197ad9-images\") pod 
\"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633101 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjdfn\" (UniqueName: \"kubernetes.io/projected/4cbc0deb-c42c-40bb-b313-44957ed5b688-kube-api-access-wjdfn\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/773a65eb-f881-42b1-a499-9dd15265f638-serving-cert\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633172 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-policies\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633194 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633216 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633237 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndw88\" (UniqueName: \"kubernetes.io/projected/773a65eb-f881-42b1-a499-9dd15265f638-kube-api-access-ndw88\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633277 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04581d94-b273-433b-a481-aa41acb8dbd4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66bd7\" (UID: \"04581d94-b273-433b-a481-aa41acb8dbd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633334 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04581d94-b273-433b-a481-aa41acb8dbd4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66bd7\" (UID: \"04581d94-b273-433b-a481-aa41acb8dbd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633336 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffafdad-e326-4d95-8733-e5b5b2197ad9-config\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x959k\" (UniqueName: \"kubernetes.io/projected/a8ee6c92-f652-4da5-8291-f3fedd05be84-kube-api-access-x959k\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/930152c4-9e5c-47e6-8c3b-46678c063e8f-serving-cert\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633401 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8cd6819e-5f95-4734-90ef-484b3362a7c9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-k6cm2\" (UID: \"8cd6819e-5f95-4734-90ef-484b3362a7c9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633432 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-client-ca\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.633992 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.634033 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/930152c4-9e5c-47e6-8c3b-46678c063e8f-node-pullsecrets\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.634112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-dir\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.634149 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8ee6c92-f652-4da5-8291-f3fedd05be84-audit-dir\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.634728 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-client-ca\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.634944 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-config\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.635203 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/930152c4-9e5c-47e6-8c3b-46678c063e8f-audit-dir\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.635796 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bfdc5b-7be8-4072-a9fc-342231fefc83-config\") pod \"kube-apiserver-operator-766d6c64bb-g8jn9\" (UID: \"c0bfdc5b-7be8-4072-a9fc-342231fefc83\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.636133 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.636277 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8ee6c92-f652-4da5-8291-f3fedd05be84-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.636442 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.636843 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1556c569-2bfa-4b43-ac95-468f72dbcb94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gl5fp\" (UID: \"1556c569-2bfa-4b43-ac95-468f72dbcb94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.637004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.637222 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1ffafdad-e326-4d95-8733-e5b5b2197ad9-images\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: \"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.637890 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ee6c92-f652-4da5-8291-f3fedd05be84-serving-cert\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.638079 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-policies\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.638083 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73ee8ff0-97e5-4ce1-aba3-110933546bab-metrics-tls\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.638145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ffafdad-e326-4d95-8733-e5b5b2197ad9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: 
\"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.638710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8ee6c92-f652-4da5-8291-f3fedd05be84-etcd-client\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.639742 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.639902 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-audit\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.640622 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.640723 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1556c569-2bfa-4b43-ac95-468f72dbcb94-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-gl5fp\" (UID: \"1556c569-2bfa-4b43-ac95-468f72dbcb94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.641418 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.644136 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.644506 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.644673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-config\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.645334 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-config\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.645506 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8ee6c92-f652-4da5-8291-f3fedd05be84-audit-policies\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.645830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/773a65eb-f881-42b1-a499-9dd15265f638-serving-cert\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.645904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cbc0deb-c42c-40bb-b313-44957ed5b688-auth-proxy-config\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.645950 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73ee8ff0-97e5-4ce1-aba3-110933546bab-trusted-ca\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.646141 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbc0deb-c42c-40bb-b313-44957ed5b688-config\") pod \"machine-approver-56656f9798-wvtqd\" (UID: 
\"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.646712 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6081bf3c-671c-46d5-8fbf-df633064cbe7-serving-cert\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.647998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4cbc0deb-c42c-40bb-b313-44957ed5b688-machine-approver-tls\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.648636 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04581d94-b273-433b-a481-aa41acb8dbd4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66bd7\" (UID: \"04581d94-b273-433b-a481-aa41acb8dbd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.650542 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8ee6c92-f652-4da5-8291-f3fedd05be84-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.650614 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-client-ca\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.651439 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/930152c4-9e5c-47e6-8c3b-46678c063e8f-etcd-serving-ca\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.651664 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8ee6c92-f652-4da5-8291-f3fedd05be84-encryption-config\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.651904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bfdc5b-7be8-4072-a9fc-342231fefc83-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-g8jn9\" (UID: \"c0bfdc5b-7be8-4072-a9fc-342231fefc83\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.651944 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/930152c4-9e5c-47e6-8c3b-46678c063e8f-encryption-config\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.651985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.652006 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/930152c4-9e5c-47e6-8c3b-46678c063e8f-etcd-client\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.652104 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.652118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cd6819e-5f95-4734-90ef-484b3362a7c9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-k6cm2\" (UID: \"8cd6819e-5f95-4734-90ef-484b3362a7c9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.652443 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.652563 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.654280 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.656752 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/930152c4-9e5c-47e6-8c3b-46678c063e8f-serving-cert\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.657132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.664772 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: 
I1125 08:13:35.676789 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04581d94-b273-433b-a481-aa41acb8dbd4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66bd7\" (UID: \"04581d94-b273-433b-a481-aa41acb8dbd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.684019 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.701145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.704557 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.743192 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.763711 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.807488 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.823570 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 08:13:35 crc 
kubenswrapper[4760]: I1125 08:13:35.844422 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.864414 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.883576 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.903414 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.923172 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.944531 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.964298 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 08:13:35 crc kubenswrapper[4760]: I1125 08:13:35.983938 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.004843 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.024671 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.043170 4760 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.064300 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.084092 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.105148 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.124007 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.144342 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.165022 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.183894 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.204161 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.224190 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.244088 4760 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.264116 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.285338 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.304484 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.323657 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.344283 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.363720 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.384055 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.406473 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.424551 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.443799 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 08:13:36 crc kubenswrapper[4760]: 
I1125 08:13:36.464107 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.484649 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.504612 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.522128 4760 request.go:700] Waited for 1.00564588s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0 Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.523889 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.544438 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.564668 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.585240 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.603846 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.623433 4760 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.644480 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.664114 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.685642 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.703726 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.723876 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.743880 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.764440 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.784293 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.804202 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.823432 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 08:13:36 
crc kubenswrapper[4760]: I1125 08:13:36.843812 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.863852 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.884449 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.904052 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.924697 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.954330 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.964173 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 08:13:36 crc kubenswrapper[4760]: I1125 08:13:36.984157 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.003966 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.023672 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.043435 4760 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.064740 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.084181 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.103392 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.124016 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.143513 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.163773 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.183372 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.203683 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.223890 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.243457 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 
08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.263769 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.284613 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.303985 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.324756 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.343900 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.364608 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.384317 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.419029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxstb\" (UniqueName: \"kubernetes.io/projected/8cd6819e-5f95-4734-90ef-484b3362a7c9-kube-api-access-wxstb\") pod \"cluster-samples-operator-665b6dd947-k6cm2\" (UID: \"8cd6819e-5f95-4734-90ef-484b3362a7c9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.438496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7cwk\" (UniqueName: \"kubernetes.io/projected/1ffafdad-e326-4d95-8733-e5b5b2197ad9-kube-api-access-r7cwk\") pod \"machine-api-operator-5694c8668f-6w6bs\" (UID: 
\"1ffafdad-e326-4d95-8733-e5b5b2197ad9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.462016 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73ee8ff0-97e5-4ce1-aba3-110933546bab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.478019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0bfdc5b-7be8-4072-a9fc-342231fefc83-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-g8jn9\" (UID: \"c0bfdc5b-7be8-4072-a9fc-342231fefc83\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.497993 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mfsm\" (UniqueName: \"kubernetes.io/projected/6081bf3c-671c-46d5-8fbf-df633064cbe7-kube-api-access-5mfsm\") pod \"controller-manager-879f6c89f-trtpm\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.517006 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.517316 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rdxm\" (UniqueName: \"kubernetes.io/projected/930152c4-9e5c-47e6-8c3b-46678c063e8f-kube-api-access-4rdxm\") pod \"apiserver-76f77b778f-9dz6w\" (UID: \"930152c4-9e5c-47e6-8c3b-46678c063e8f\") " pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.522489 4760 request.go:700] Waited for 1.887461113s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.537646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.558745 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbg4w\" (UniqueName: \"kubernetes.io/projected/73ee8ff0-97e5-4ce1-aba3-110933546bab-kube-api-access-dbg4w\") pod \"ingress-operator-5b745b69d9-xvqpn\" (UID: \"73ee8ff0-97e5-4ce1-aba3-110933546bab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.585652 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.587674 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd2jr\" (UniqueName: \"kubernetes.io/projected/1556c569-2bfa-4b43-ac95-468f72dbcb94-kube-api-access-rd2jr\") pod \"openshift-config-operator-7777fb866f-gl5fp\" (UID: \"1556c569-2bfa-4b43-ac95-468f72dbcb94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.603142 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp6vm\" (UniqueName: \"kubernetes.io/projected/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-kube-api-access-wp6vm\") pod \"oauth-openshift-558db77b4-bsp8l\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.614478 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.618386 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjdfn\" (UniqueName: \"kubernetes.io/projected/4cbc0deb-c42c-40bb-b313-44957ed5b688-kube-api-access-wjdfn\") pod \"machine-approver-56656f9798-wvtqd\" (UID: \"4cbc0deb-c42c-40bb-b313-44957ed5b688\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.639774 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgdz6\" (UniqueName: \"kubernetes.io/projected/c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25-kube-api-access-qgdz6\") pod \"cluster-image-registry-operator-dc59b4c8b-k4jd9\" (UID: \"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.643449 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.652949 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.658218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndw88\" (UniqueName: \"kubernetes.io/projected/773a65eb-f881-42b1-a499-9dd15265f638-kube-api-access-ndw88\") pod \"route-controller-manager-6576b87f9c-tss44\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.677041 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.678360 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-trtpm"] Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.680437 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04581d94-b273-433b-a481-aa41acb8dbd4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-66bd7\" (UID: \"04581d94-b273-433b-a481-aa41acb8dbd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.684131 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" Nov 25 08:13:37 crc kubenswrapper[4760]: W1125 08:13:37.691837 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6081bf3c_671c_46d5_8fbf_df633064cbe7.slice/crio-9e39f0769d491a8afb3f18c4fcd849ccee93161d6e625cbb71fe19ecab608a1d WatchSource:0}: Error finding container 9e39f0769d491a8afb3f18c4fcd849ccee93161d6e625cbb71fe19ecab608a1d: Status 404 returned error can't find the container with id 9e39f0769d491a8afb3f18c4fcd849ccee93161d6e625cbb71fe19ecab608a1d Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.698272 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.709443 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.714272 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x959k\" (UniqueName: \"kubernetes.io/projected/a8ee6c92-f652-4da5-8291-f3fedd05be84-kube-api-access-x959k\") pod \"apiserver-7bbb656c7d-68hpd\" (UID: \"a8ee6c92-f652-4da5-8291-f3fedd05be84\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.721727 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.760710 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2"] Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765394 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3191076e-9c9c-4e4a-923f-3189e4414342-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2vzk\" (UID: \"3191076e-9c9c-4e4a-923f-3189e4414342\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765455 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-service-ca\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765526 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-config\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765561 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-trusted-ca-bundle\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765587 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggn8f\" (UniqueName: \"kubernetes.io/projected/992a574c-b7d7-467f-be0a-98be57052cb6-kube-api-access-ggn8f\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765622 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-console-config\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-oauth-serving-cert\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765658 
4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/992a574c-b7d7-467f-be0a-98be57052cb6-serving-cert\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-242hs\" (UniqueName: \"kubernetes.io/projected/f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945-kube-api-access-242hs\") pod \"downloads-7954f5f757-pvjn5\" (UID: \"f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945\") " pod="openshift-console/downloads-7954f5f757-pvjn5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765708 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/584213d2-6225-4cab-b558-22d0b9990cd8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3191076e-9c9c-4e4a-923f-3189e4414342-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2vzk\" (UID: \"3191076e-9c9c-4e4a-923f-3189e4414342\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765741 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6qtt\" (UniqueName: 
\"kubernetes.io/projected/3191076e-9c9c-4e4a-923f-3189e4414342-kube-api-access-s6qtt\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2vzk\" (UID: \"3191076e-9c9c-4e4a-923f-3189e4414342\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765760 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-registry-tls\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-trusted-ca\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765796 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-registry-certificates\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765816 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm974\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-kube-api-access-cm974\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765842 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/992a574c-b7d7-467f-be0a-98be57052cb6-config\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765874 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-serving-cert\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765894 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-oauth-config\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765916 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/992a574c-b7d7-467f-be0a-98be57052cb6-trusted-ca\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765934 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/584213d2-6225-4cab-b558-22d0b9990cd8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765949 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwgh6\" (UniqueName: \"kubernetes.io/projected/916b7590-b541-4ca9-b432-861731b7ae94-kube-api-access-bwgh6\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.765994 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.766017 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-serving-cert\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.766034 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6ddl\" (UniqueName: \"kubernetes.io/projected/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-kube-api-access-z6ddl\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.766054 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-bound-sa-token\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.766093 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-service-ca-bundle\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: E1125 08:13:37.766552 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:38.266533001 +0000 UTC m=+151.975563836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.774332 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.790462 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.860719 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bsp8l"] Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867308 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867455 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdd8c\" (UniqueName: \"kubernetes.io/projected/a2d1bd43-b1f2-45bf-abfc-9e43609ee07f-kube-api-access-vdd8c\") pod \"package-server-manager-789f6589d5-tdrjl\" (UID: \"a2d1bd43-b1f2-45bf-abfc-9e43609ee07f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867475 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5ed67ec-0477-4a5b-8a35-e857d183ed53-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sp2hg\" (UID: \"e5ed67ec-0477-4a5b-8a35-e857d183ed53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867496 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-bound-sa-token\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867519 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e04c1c07-99b1-4354-8f39-a16776c388aa-tmpfs\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867534 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e04c1c07-99b1-4354-8f39-a16776c388aa-apiservice-cert\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwdqd\" (UniqueName: \"kubernetes.io/projected/5d0f456a-ead3-4fc9-8532-46d629ebb86a-kube-api-access-kwdqd\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-bpqx2\" (UID: \"5d0f456a-ead3-4fc9-8532-46d629ebb86a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-etcd-service-ca\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9d4d34a-f5d6-425f-bb81-bad575d7178c-proxy-tls\") pod \"machine-config-controller-84d6567774-8g6nh\" (UID: \"f9d4d34a-f5d6-425f-bb81-bad575d7178c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867677 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3191076e-9c9c-4e4a-923f-3189e4414342-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2vzk\" (UID: \"3191076e-9c9c-4e4a-923f-3189e4414342\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdpll\" (UniqueName: \"kubernetes.io/projected/1242c727-9313-453b-a4a4-899623f2413d-kube-api-access-fdpll\") pod \"machine-config-server-9g4f5\" (UID: \"1242c727-9313-453b-a4a4-899623f2413d\") " pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 
08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867717 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbfmt\" (UniqueName: \"kubernetes.io/projected/59a3d8d6-d2cf-48ff-852f-de1f2f0de439-kube-api-access-wbfmt\") pod \"service-ca-operator-777779d784-2m676\" (UID: \"59a3d8d6-d2cf-48ff-852f-de1f2f0de439\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2d1bd43-b1f2-45bf-abfc-9e43609ee07f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tdrjl\" (UID: \"a2d1bd43-b1f2-45bf-abfc-9e43609ee07f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-config\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867774 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-trusted-ca-bundle\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867792 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggn8f\" (UniqueName: 
\"kubernetes.io/projected/992a574c-b7d7-467f-be0a-98be57052cb6-kube-api-access-ggn8f\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867806 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1242c727-9313-453b-a4a4-899623f2413d-node-bootstrap-token\") pod \"machine-config-server-9g4f5\" (UID: \"1242c727-9313-453b-a4a4-899623f2413d\") " pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867821 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64ce0c87-2515-445e-ad80-e95bae36bfd0-srv-cert\") pod \"olm-operator-6b444d44fb-h4x6x\" (UID: \"64ce0c87-2515-445e-ad80-e95bae36bfd0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.867896 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-socket-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.869690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vsqf\" (UniqueName: \"kubernetes.io/projected/e04c1c07-99b1-4354-8f39-a16776c388aa-kube-api-access-5vsqf\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:37 crc 
kubenswrapper[4760]: I1125 08:13:37.869724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a23229ef-e215-4e9f-a8e0-d38be72aef90-config-volume\") pod \"collect-profiles-29400960-sxgpp\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:37 crc kubenswrapper[4760]: E1125 08:13:37.869897 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:38.369872924 +0000 UTC m=+152.078903719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.869987 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rffw\" (UniqueName: \"kubernetes.io/projected/3acc0e9c-36be-4834-8450-d68aec396f24-kube-api-access-9rffw\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf8bv\" (UID: \"3acc0e9c-36be-4834-8450-d68aec396f24\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870019 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e5ed67ec-0477-4a5b-8a35-e857d183ed53-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sp2hg\" (UID: \"e5ed67ec-0477-4a5b-8a35-e857d183ed53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870073 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-config\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870332 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-mountpoint-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870374 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/436b3b5b-76e0-416d-8f55-de0bb312f46d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8rhtx\" (UID: \"436b3b5b-76e0-416d-8f55-de0bb312f46d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870424 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-trusted-ca-bundle\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870437 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3191076e-9c9c-4e4a-923f-3189e4414342-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2vzk\" (UID: \"3191076e-9c9c-4e4a-923f-3189e4414342\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870472 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/992a574c-b7d7-467f-be0a-98be57052cb6-serving-cert\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870567 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lbwk\" (UniqueName: \"kubernetes.io/projected/d30f6634-bff6-4b14-a07c-752377452b53-kube-api-access-4lbwk\") pod \"dns-default-rml2b\" (UID: \"d30f6634-bff6-4b14-a07c-752377452b53\") " pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870594 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4358264-0e5b-4c15-b34b-c65740995ec0-cert\") pod \"ingress-canary-gpwlx\" (UID: \"a4358264-0e5b-4c15-b34b-c65740995ec0\") " pod="openshift-ingress-canary/ingress-canary-gpwlx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870818 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-242hs\" (UniqueName: \"kubernetes.io/projected/f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945-kube-api-access-242hs\") pod \"downloads-7954f5f757-pvjn5\" (UID: \"f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945\") " pod="openshift-console/downloads-7954f5f757-pvjn5" Nov 25 
08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.870866 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3acc0e9c-36be-4834-8450-d68aec396f24-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf8bv\" (UID: \"3acc0e9c-36be-4834-8450-d68aec396f24\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.871043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3191076e-9c9c-4e4a-923f-3189e4414342-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2vzk\" (UID: \"3191076e-9c9c-4e4a-923f-3189e4414342\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.871055 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-config\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.871181 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6qtt\" (UniqueName: \"kubernetes.io/projected/3191076e-9c9c-4e4a-923f-3189e4414342-kube-api-access-s6qtt\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2vzk\" (UID: \"3191076e-9c9c-4e4a-923f-3189e4414342\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.871225 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a3d8d6-d2cf-48ff-852f-de1f2f0de439-serving-cert\") pod \"service-ca-operator-777779d784-2m676\" (UID: \"59a3d8d6-d2cf-48ff-852f-de1f2f0de439\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.871270 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-km6r5\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") " pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.871555 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-registry-tls\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.871893 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64ce0c87-2515-445e-ad80-e95bae36bfd0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h4x6x\" (UID: \"64ce0c87-2515-445e-ad80-e95bae36bfd0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.871971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm974\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-kube-api-access-cm974\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: 
\"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.871994 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d15298c9-07f3-469c-a03d-007cc07146e1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jsphj\" (UID: \"d15298c9-07f3-469c-a03d-007cc07146e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-serving-cert\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872196 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e04c1c07-99b1-4354-8f39-a16776c388aa-webhook-cert\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872343 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-etcd-client\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872404 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/584213d2-6225-4cab-b558-22d0b9990cd8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwgh6\" (UniqueName: \"kubernetes.io/projected/916b7590-b541-4ca9-b432-861731b7ae94-kube-api-access-bwgh6\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872495 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872523 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/436b3b5b-76e0-416d-8f55-de0bb312f46d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8rhtx\" (UID: \"436b3b5b-76e0-416d-8f55-de0bb312f46d\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-serving-cert\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6ddl\" (UniqueName: \"kubernetes.io/projected/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-kube-api-access-z6ddl\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/03b7a2c7-309a-4f84-8cf1-0dd3b0562544-signing-cabundle\") pod \"service-ca-9c57cc56f-vttfl\" (UID: \"03b7a2c7-309a-4f84-8cf1-0dd3b0562544\") " pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.872655 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldp8l\" (UniqueName: \"kubernetes.io/projected/35250086-d3b8-4f83-a232-aba1a9d09bb2-kube-api-access-ldp8l\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: E1125 08:13:37.873355 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:38.373341626 +0000 UTC m=+152.082372411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.873384 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898w6\" (UniqueName: \"kubernetes.io/projected/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-kube-api-access-898w6\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.873406 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a23229ef-e215-4e9f-a8e0-d38be72aef90-secret-volume\") pod \"collect-profiles-29400960-sxgpp\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.873430 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-service-ca-bundle\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.873717 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/584213d2-6225-4cab-b558-22d0b9990cd8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.873935 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a3d8d6-d2cf-48ff-852f-de1f2f0de439-config\") pod \"service-ca-operator-777779d784-2m676\" (UID: \"59a3d8d6-d2cf-48ff-852f-de1f2f0de439\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.873999 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e42451d2-d417-420f-b109-845278870cfb-srv-cert\") pod \"catalog-operator-68c6474976-4p4tt\" (UID: \"e42451d2-d417-420f-b109-845278870cfb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.874053 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08265d42-a708-4bf9-9e5c-a791becc2aa5-metrics-tls\") pod \"dns-operator-744455d44c-xbx5c\" (UID: \"08265d42-a708-4bf9-9e5c-a791becc2aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.874103 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5d0f456a-ead3-4fc9-8532-46d629ebb86a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bpqx2\" (UID: \"5d0f456a-ead3-4fc9-8532-46d629ebb86a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.874161 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-registration-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.874185 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-km6r5\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") " pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.874545 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-service-ca\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.874630 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc 
kubenswrapper[4760]: I1125 08:13:37.874695 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-service-ca-bundle\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.875982 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d30f6634-bff6-4b14-a07c-752377452b53-metrics-tls\") pod \"dns-default-rml2b\" (UID: \"d30f6634-bff6-4b14-a07c-752377452b53\") " pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876011 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b95137c-8f1b-4e15-8ae2-4c6192118119-default-certificate\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876036 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-serving-cert\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876077 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh8bj\" (UniqueName: \"kubernetes.io/projected/64ce0c87-2515-445e-ad80-e95bae36bfd0-kube-api-access-vh8bj\") pod \"olm-operator-6b444d44fb-h4x6x\" (UID: 
\"64ce0c87-2515-445e-ad80-e95bae36bfd0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876097 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq4lb\" (UniqueName: \"kubernetes.io/projected/08265d42-a708-4bf9-9e5c-a791becc2aa5-kube-api-access-mq4lb\") pod \"dns-operator-744455d44c-xbx5c\" (UID: \"08265d42-a708-4bf9-9e5c-a791becc2aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876135 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b95137c-8f1b-4e15-8ae2-4c6192118119-stats-auth\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876218 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-console-config\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876258 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-oauth-serving-cert\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876281 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wvpt\" (UniqueName: 
\"kubernetes.io/projected/e42451d2-d417-420f-b109-845278870cfb-kube-api-access-8wvpt\") pod \"catalog-operator-68c6474976-4p4tt\" (UID: \"e42451d2-d417-420f-b109-845278870cfb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876303 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-plugins-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876326 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfrdn\" (UniqueName: \"kubernetes.io/projected/7b95137c-8f1b-4e15-8ae2-4c6192118119-kube-api-access-hfrdn\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876345 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4kq\" (UniqueName: \"kubernetes.io/projected/a23229ef-e215-4e9f-a8e0-d38be72aef90-kube-api-access-6h4kq\") pod \"collect-profiles-29400960-sxgpp\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876383 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9f4j\" (UniqueName: \"kubernetes.io/projected/03dbd329-8b62-4fd5-8cfe-87c495680e02-kube-api-access-q9f4j\") pod \"migrator-59844c95c7-grp6l\" (UID: \"03dbd329-8b62-4fd5-8cfe-87c495680e02\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l" 
Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876408 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv88g\" (UniqueName: \"kubernetes.io/projected/f9d4d34a-f5d6-425f-bb81-bad575d7178c-kube-api-access-zv88g\") pod \"machine-config-controller-84d6567774-8g6nh\" (UID: \"f9d4d34a-f5d6-425f-bb81-bad575d7178c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876430 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-proxy-tls\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876462 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j2xn\" (UniqueName: \"kubernetes.io/projected/e5ed67ec-0477-4a5b-8a35-e857d183ed53-kube-api-access-6j2xn\") pod \"openshift-apiserver-operator-796bbdcf4f-sp2hg\" (UID: \"e5ed67ec-0477-4a5b-8a35-e857d183ed53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876489 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-service-ca\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876505 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/584213d2-6225-4cab-b558-22d0b9990cd8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876530 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436b3b5b-76e0-416d-8f55-de0bb312f46d-config\") pod \"kube-controller-manager-operator-78b949d7b-8rhtx\" (UID: \"436b3b5b-76e0-416d-8f55-de0bb312f46d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876556 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wgpm\" (UniqueName: \"kubernetes.io/projected/03b7a2c7-309a-4f84-8cf1-0dd3b0562544-kube-api-access-6wgpm\") pod \"service-ca-9c57cc56f-vttfl\" (UID: \"03b7a2c7-309a-4f84-8cf1-0dd3b0562544\") " pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876580 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9d4d34a-f5d6-425f-bb81-bad575d7178c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8g6nh\" (UID: \"f9d4d34a-f5d6-425f-bb81-bad575d7178c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876609 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-trusted-ca\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876633 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-images\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876657 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-csi-data-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876680 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrr4x\" (UniqueName: \"kubernetes.io/projected/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-kube-api-access-wrr4x\") pod \"marketplace-operator-79b997595-km6r5\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") " pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876705 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-registry-certificates\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/1242c727-9313-453b-a4a4-899623f2413d-certs\") pod \"machine-config-server-9g4f5\" (UID: \"1242c727-9313-453b-a4a4-899623f2413d\") " pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/992a574c-b7d7-467f-be0a-98be57052cb6-config\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876838 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d30f6634-bff6-4b14-a07c-752377452b53-config-volume\") pod \"dns-default-rml2b\" (UID: \"d30f6634-bff6-4b14-a07c-752377452b53\") " pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876879 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb7pj\" (UniqueName: \"kubernetes.io/projected/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-kube-api-access-hb7pj\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876929 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b95137c-8f1b-4e15-8ae2-4c6192118119-service-ca-bundle\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-oauth-config\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.876994 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/992a574c-b7d7-467f-be0a-98be57052cb6-trusted-ca\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.877014 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b95137c-8f1b-4e15-8ae2-4c6192118119-metrics-certs\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.877442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d0f456a-ead3-4fc9-8532-46d629ebb86a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bpqx2\" (UID: \"5d0f456a-ead3-4fc9-8532-46d629ebb86a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.877470 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg92x\" (UniqueName: \"kubernetes.io/projected/d15298c9-07f3-469c-a03d-007cc07146e1-kube-api-access-hg92x\") pod \"multus-admission-controller-857f4d67dd-jsphj\" (UID: \"d15298c9-07f3-469c-a03d-007cc07146e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.877493 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-etcd-ca\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.877537 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m52mh\" (UniqueName: \"kubernetes.io/projected/a4358264-0e5b-4c15-b34b-c65740995ec0-kube-api-access-m52mh\") pod \"ingress-canary-gpwlx\" (UID: \"a4358264-0e5b-4c15-b34b-c65740995ec0\") " pod="openshift-ingress-canary/ingress-canary-gpwlx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.877561 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/03b7a2c7-309a-4f84-8cf1-0dd3b0562544-signing-key\") pod \"service-ca-9c57cc56f-vttfl\" (UID: \"03b7a2c7-309a-4f84-8cf1-0dd3b0562544\") " pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.877584 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/e42451d2-d417-420f-b109-845278870cfb-profile-collector-cert\") pod \"catalog-operator-68c6474976-4p4tt\" (UID: \"e42451d2-d417-420f-b109-845278870cfb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.877901 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-oauth-serving-cert\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.877471 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-console-config\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.885692 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/992a574c-b7d7-467f-be0a-98be57052cb6-serving-cert\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.885695 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-registry-tls\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.885880 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/992a574c-b7d7-467f-be0a-98be57052cb6-config\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.886077 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3191076e-9c9c-4e4a-923f-3189e4414342-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m2vzk\" (UID: \"3191076e-9c9c-4e4a-923f-3189e4414342\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.886675 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-trusted-ca\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.886688 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-registry-certificates\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.887683 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/992a574c-b7d7-467f-be0a-98be57052cb6-trusted-ca\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.897164 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-serving-cert\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.904182 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-serving-cert\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.906195 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.908840 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-oauth-config\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.909942 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/584213d2-6225-4cab-b558-22d0b9990cd8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: W1125 08:13:37.911531 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fce3bec_6d01_47d6_aa9e_ca61f62921c8.slice/crio-1a71cb68f18b4aedf5744fd67fce57602e1d49824a65249552de2a32db401d39 WatchSource:0}: Error finding container 1a71cb68f18b4aedf5744fd67fce57602e1d49824a65249552de2a32db401d39: Status 404 returned error can't find the container with id 1a71cb68f18b4aedf5744fd67fce57602e1d49824a65249552de2a32db401d39 Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.924077 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-bound-sa-token\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.926536 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9"] Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.949411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggn8f\" (UniqueName: \"kubernetes.io/projected/992a574c-b7d7-467f-be0a-98be57052cb6-kube-api-access-ggn8f\") pod \"console-operator-58897d9998-46x6w\" (UID: \"992a574c-b7d7-467f-be0a-98be57052cb6\") " pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.960033 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-242hs\" (UniqueName: \"kubernetes.io/projected/f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945-kube-api-access-242hs\") pod \"downloads-7954f5f757-pvjn5\" (UID: \"f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945\") " pod="openshift-console/downloads-7954f5f757-pvjn5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978618 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978775 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a3d8d6-d2cf-48ff-852f-de1f2f0de439-serving-cert\") pod \"service-ca-operator-777779d784-2m676\" (UID: \"59a3d8d6-d2cf-48ff-852f-de1f2f0de439\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978800 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-km6r5\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") " pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978818 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64ce0c87-2515-445e-ad80-e95bae36bfd0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h4x6x\" (UID: \"64ce0c87-2515-445e-ad80-e95bae36bfd0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d15298c9-07f3-469c-a03d-007cc07146e1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jsphj\" (UID: \"d15298c9-07f3-469c-a03d-007cc07146e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" Nov 25 08:13:37 crc 
kubenswrapper[4760]: I1125 08:13:37.978862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e04c1c07-99b1-4354-8f39-a16776c388aa-webhook-cert\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-etcd-client\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/436b3b5b-76e0-416d-8f55-de0bb312f46d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8rhtx\" (UID: \"436b3b5b-76e0-416d-8f55-de0bb312f46d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/03b7a2c7-309a-4f84-8cf1-0dd3b0562544-signing-cabundle\") pod \"service-ca-9c57cc56f-vttfl\" (UID: \"03b7a2c7-309a-4f84-8cf1-0dd3b0562544\") " pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldp8l\" (UniqueName: \"kubernetes.io/projected/35250086-d3b8-4f83-a232-aba1a9d09bb2-kube-api-access-ldp8l\") pod \"csi-hostpathplugin-h8svr\" (UID: 
\"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898w6\" (UniqueName: \"kubernetes.io/projected/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-kube-api-access-898w6\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978972 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a23229ef-e215-4e9f-a8e0-d38be72aef90-secret-volume\") pod \"collect-profiles-29400960-sxgpp\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.978988 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a3d8d6-d2cf-48ff-852f-de1f2f0de439-config\") pod \"service-ca-operator-777779d784-2m676\" (UID: \"59a3d8d6-d2cf-48ff-852f-de1f2f0de439\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979003 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e42451d2-d417-420f-b109-845278870cfb-srv-cert\") pod \"catalog-operator-68c6474976-4p4tt\" (UID: \"e42451d2-d417-420f-b109-845278870cfb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979018 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/08265d42-a708-4bf9-9e5c-a791becc2aa5-metrics-tls\") pod \"dns-operator-744455d44c-xbx5c\" (UID: \"08265d42-a708-4bf9-9e5c-a791becc2aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d0f456a-ead3-4fc9-8532-46d629ebb86a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bpqx2\" (UID: \"5d0f456a-ead3-4fc9-8532-46d629ebb86a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979048 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-registration-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979066 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-km6r5\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") " pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979085 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d30f6634-bff6-4b14-a07c-752377452b53-metrics-tls\") pod \"dns-default-rml2b\" (UID: \"d30f6634-bff6-4b14-a07c-752377452b53\") " pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979100 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b95137c-8f1b-4e15-8ae2-4c6192118119-default-certificate\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-serving-cert\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979138 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh8bj\" (UniqueName: \"kubernetes.io/projected/64ce0c87-2515-445e-ad80-e95bae36bfd0-kube-api-access-vh8bj\") pod \"olm-operator-6b444d44fb-h4x6x\" (UID: \"64ce0c87-2515-445e-ad80-e95bae36bfd0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979155 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq4lb\" (UniqueName: \"kubernetes.io/projected/08265d42-a708-4bf9-9e5c-a791becc2aa5-kube-api-access-mq4lb\") pod \"dns-operator-744455d44c-xbx5c\" (UID: \"08265d42-a708-4bf9-9e5c-a791becc2aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979170 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b95137c-8f1b-4e15-8ae2-4c6192118119-stats-auth\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: 
I1125 08:13:37.979195 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wvpt\" (UniqueName: \"kubernetes.io/projected/e42451d2-d417-420f-b109-845278870cfb-kube-api-access-8wvpt\") pod \"catalog-operator-68c6474976-4p4tt\" (UID: \"e42451d2-d417-420f-b109-845278870cfb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-plugins-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979229 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfrdn\" (UniqueName: \"kubernetes.io/projected/7b95137c-8f1b-4e15-8ae2-4c6192118119-kube-api-access-hfrdn\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979273 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h4kq\" (UniqueName: \"kubernetes.io/projected/a23229ef-e215-4e9f-a8e0-d38be72aef90-kube-api-access-6h4kq\") pod \"collect-profiles-29400960-sxgpp\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979293 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9f4j\" (UniqueName: \"kubernetes.io/projected/03dbd329-8b62-4fd5-8cfe-87c495680e02-kube-api-access-q9f4j\") pod \"migrator-59844c95c7-grp6l\" (UID: \"03dbd329-8b62-4fd5-8cfe-87c495680e02\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv88g\" (UniqueName: \"kubernetes.io/projected/f9d4d34a-f5d6-425f-bb81-bad575d7178c-kube-api-access-zv88g\") pod \"machine-config-controller-84d6567774-8g6nh\" (UID: \"f9d4d34a-f5d6-425f-bb81-bad575d7178c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979325 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-proxy-tls\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979340 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j2xn\" (UniqueName: \"kubernetes.io/projected/e5ed67ec-0477-4a5b-8a35-e857d183ed53-kube-api-access-6j2xn\") pod \"openshift-apiserver-operator-796bbdcf4f-sp2hg\" (UID: \"e5ed67ec-0477-4a5b-8a35-e857d183ed53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979356 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436b3b5b-76e0-416d-8f55-de0bb312f46d-config\") pod \"kube-controller-manager-operator-78b949d7b-8rhtx\" (UID: \"436b3b5b-76e0-416d-8f55-de0bb312f46d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979371 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6wgpm\" (UniqueName: \"kubernetes.io/projected/03b7a2c7-309a-4f84-8cf1-0dd3b0562544-kube-api-access-6wgpm\") pod \"service-ca-9c57cc56f-vttfl\" (UID: \"03b7a2c7-309a-4f84-8cf1-0dd3b0562544\") " pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979386 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9d4d34a-f5d6-425f-bb81-bad575d7178c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8g6nh\" (UID: \"f9d4d34a-f5d6-425f-bb81-bad575d7178c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.979406 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-images\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:37 crc kubenswrapper[4760]: E1125 08:13:37.979626 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:38.479609376 +0000 UTC m=+152.188640171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.980219 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a3d8d6-d2cf-48ff-852f-de1f2f0de439-config\") pod \"service-ca-operator-777779d784-2m676\" (UID: \"59a3d8d6-d2cf-48ff-852f-de1f2f0de439\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.980312 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/436b3b5b-76e0-416d-8f55-de0bb312f46d-config\") pod \"kube-controller-manager-operator-78b949d7b-8rhtx\" (UID: \"436b3b5b-76e0-416d-8f55-de0bb312f46d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.980459 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-registration-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981101 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6qtt\" (UniqueName: \"kubernetes.io/projected/3191076e-9c9c-4e4a-923f-3189e4414342-kube-api-access-s6qtt\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-m2vzk\" (UID: \"3191076e-9c9c-4e4a-923f-3189e4414342\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981569 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-images\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981598 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-csi-data-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981617 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrr4x\" (UniqueName: \"kubernetes.io/projected/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-kube-api-access-wrr4x\") pod \"marketplace-operator-79b997595-km6r5\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") " pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981634 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1242c727-9313-453b-a4a4-899623f2413d-certs\") pod \"machine-config-server-9g4f5\" (UID: \"1242c727-9313-453b-a4a4-899623f2413d\") " pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/d30f6634-bff6-4b14-a07c-752377452b53-config-volume\") pod \"dns-default-rml2b\" (UID: \"d30f6634-bff6-4b14-a07c-752377452b53\") " pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981728 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb7pj\" (UniqueName: \"kubernetes.io/projected/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-kube-api-access-hb7pj\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b95137c-8f1b-4e15-8ae2-4c6192118119-service-ca-bundle\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981762 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b95137c-8f1b-4e15-8ae2-4c6192118119-metrics-certs\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981778 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d0f456a-ead3-4fc9-8532-46d629ebb86a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bpqx2\" (UID: \"5d0f456a-ead3-4fc9-8532-46d629ebb86a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981794 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg92x\" (UniqueName: \"kubernetes.io/projected/d15298c9-07f3-469c-a03d-007cc07146e1-kube-api-access-hg92x\") pod \"multus-admission-controller-857f4d67dd-jsphj\" (UID: \"d15298c9-07f3-469c-a03d-007cc07146e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-etcd-ca\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.981929 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-km6r5\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") " pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.982600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f9d4d34a-f5d6-425f-bb81-bad575d7178c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8g6nh\" (UID: \"f9d4d34a-f5d6-425f-bb81-bad575d7178c\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.982992 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-csi-data-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.983192 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/03b7a2c7-309a-4f84-8cf1-0dd3b0562544-signing-cabundle\") pod \"service-ca-9c57cc56f-vttfl\" (UID: \"03b7a2c7-309a-4f84-8cf1-0dd3b0562544\") " pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.985996 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-plugins-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.986790 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d0f456a-ead3-4fc9-8532-46d629ebb86a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bpqx2\" (UID: \"5d0f456a-ead3-4fc9-8532-46d629ebb86a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.986928 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d30f6634-bff6-4b14-a07c-752377452b53-config-volume\") pod \"dns-default-rml2b\" (UID: 
\"d30f6634-bff6-4b14-a07c-752377452b53\") " pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.993580 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-etcd-ca\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.995366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m52mh\" (UniqueName: \"kubernetes.io/projected/a4358264-0e5b-4c15-b34b-c65740995ec0-kube-api-access-m52mh\") pod \"ingress-canary-gpwlx\" (UID: \"a4358264-0e5b-4c15-b34b-c65740995ec0\") " pod="openshift-ingress-canary/ingress-canary-gpwlx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.995415 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e42451d2-d417-420f-b109-845278870cfb-profile-collector-cert\") pod \"catalog-operator-68c6474976-4p4tt\" (UID: \"e42451d2-d417-420f-b109-845278870cfb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.995439 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/03b7a2c7-309a-4f84-8cf1-0dd3b0562544-signing-key\") pod \"service-ca-9c57cc56f-vttfl\" (UID: \"03b7a2c7-309a-4f84-8cf1-0dd3b0562544\") " pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.995462 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdd8c\" (UniqueName: \"kubernetes.io/projected/a2d1bd43-b1f2-45bf-abfc-9e43609ee07f-kube-api-access-vdd8c\") pod 
\"package-server-manager-789f6589d5-tdrjl\" (UID: \"a2d1bd43-b1f2-45bf-abfc-9e43609ee07f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.995484 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5ed67ec-0477-4a5b-8a35-e857d183ed53-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sp2hg\" (UID: \"e5ed67ec-0477-4a5b-8a35-e857d183ed53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.995870 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e04c1c07-99b1-4354-8f39-a16776c388aa-tmpfs\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.995896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e04c1c07-99b1-4354-8f39-a16776c388aa-apiservice-cert\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.995936 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwdqd\" (UniqueName: \"kubernetes.io/projected/5d0f456a-ead3-4fc9-8532-46d629ebb86a-kube-api-access-kwdqd\") pod \"kube-storage-version-migrator-operator-b67b599dd-bpqx2\" (UID: \"5d0f456a-ead3-4fc9-8532-46d629ebb86a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.995980 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-etcd-service-ca\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9d4d34a-f5d6-425f-bb81-bad575d7178c-proxy-tls\") pod \"machine-config-controller-84d6567774-8g6nh\" (UID: \"f9d4d34a-f5d6-425f-bb81-bad575d7178c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996039 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdpll\" (UniqueName: \"kubernetes.io/projected/1242c727-9313-453b-a4a4-899623f2413d-kube-api-access-fdpll\") pod \"machine-config-server-9g4f5\" (UID: \"1242c727-9313-453b-a4a4-899623f2413d\") " pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996063 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbfmt\" (UniqueName: \"kubernetes.io/projected/59a3d8d6-d2cf-48ff-852f-de1f2f0de439-kube-api-access-wbfmt\") pod \"service-ca-operator-777779d784-2m676\" (UID: \"59a3d8d6-d2cf-48ff-852f-de1f2f0de439\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996094 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2d1bd43-b1f2-45bf-abfc-9e43609ee07f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tdrjl\" (UID: 
\"a2d1bd43-b1f2-45bf-abfc-9e43609ee07f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1242c727-9313-453b-a4a4-899623f2413d-node-bootstrap-token\") pod \"machine-config-server-9g4f5\" (UID: \"1242c727-9313-453b-a4a4-899623f2413d\") " pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996166 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64ce0c87-2515-445e-ad80-e95bae36bfd0-srv-cert\") pod \"olm-operator-6b444d44fb-h4x6x\" (UID: \"64ce0c87-2515-445e-ad80-e95bae36bfd0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996190 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-socket-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vsqf\" (UniqueName: \"kubernetes.io/projected/e04c1c07-99b1-4354-8f39-a16776c388aa-kube-api-access-5vsqf\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996234 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a23229ef-e215-4e9f-a8e0-d38be72aef90-config-volume\") pod \"collect-profiles-29400960-sxgpp\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996278 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rffw\" (UniqueName: \"kubernetes.io/projected/3acc0e9c-36be-4834-8450-d68aec396f24-kube-api-access-9rffw\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf8bv\" (UID: \"3acc0e9c-36be-4834-8450-d68aec396f24\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ed67ec-0477-4a5b-8a35-e857d183ed53-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sp2hg\" (UID: \"e5ed67ec-0477-4a5b-8a35-e857d183ed53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996328 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-config\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996353 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-mountpoint-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996360 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b95137c-8f1b-4e15-8ae2-4c6192118119-default-certificate\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/436b3b5b-76e0-416d-8f55-de0bb312f46d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8rhtx\" (UID: \"436b3b5b-76e0-416d-8f55-de0bb312f46d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996405 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lbwk\" (UniqueName: \"kubernetes.io/projected/d30f6634-bff6-4b14-a07c-752377452b53-kube-api-access-4lbwk\") pod \"dns-default-rml2b\" (UID: \"d30f6634-bff6-4b14-a07c-752377452b53\") " pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996425 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4358264-0e5b-4c15-b34b-c65740995ec0-cert\") pod \"ingress-canary-gpwlx\" (UID: \"a4358264-0e5b-4c15-b34b-c65740995ec0\") " pod="openshift-ingress-canary/ingress-canary-gpwlx" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996453 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3acc0e9c-36be-4834-8450-d68aec396f24-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf8bv\" (UID: \"3acc0e9c-36be-4834-8450-d68aec396f24\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996625 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp"] Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.996993 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-serving-cert\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.997004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b95137c-8f1b-4e15-8ae2-4c6192118119-stats-auth\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.997085 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-pvjn5" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.997395 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08265d42-a708-4bf9-9e5c-a791becc2aa5-metrics-tls\") pod \"dns-operator-744455d44c-xbx5c\" (UID: \"08265d42-a708-4bf9-9e5c-a791becc2aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.997671 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-socket-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.997763 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b95137c-8f1b-4e15-8ae2-4c6192118119-service-ca-bundle\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.997958 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-etcd-client\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:37 crc kubenswrapper[4760]: I1125 08:13:37.998268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-etcd-service-ca\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.001814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e04c1c07-99b1-4354-8f39-a16776c388aa-tmpfs\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.001839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-config\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.002632 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ed67ec-0477-4a5b-8a35-e857d183ed53-config\") pod \"openshift-apiserver-operator-796bbdcf4f-sp2hg\" (UID: \"e5ed67ec-0477-4a5b-8a35-e857d183ed53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.002732 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d30f6634-bff6-4b14-a07c-752377452b53-metrics-tls\") pod \"dns-default-rml2b\" (UID: \"d30f6634-bff6-4b14-a07c-752377452b53\") " pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.002791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/35250086-d3b8-4f83-a232-aba1a9d09bb2-mountpoint-dir\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 
08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.003110 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a23229ef-e215-4e9f-a8e0-d38be72aef90-config-volume\") pod \"collect-profiles-29400960-sxgpp\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.003603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e42451d2-d417-420f-b109-845278870cfb-profile-collector-cert\") pod \"catalog-operator-68c6474976-4p4tt\" (UID: \"e42451d2-d417-420f-b109-845278870cfb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.004158 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64ce0c87-2515-445e-ad80-e95bae36bfd0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h4x6x\" (UID: \"64ce0c87-2515-445e-ad80-e95bae36bfd0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.005016 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d0f456a-ead3-4fc9-8532-46d629ebb86a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bpqx2\" (UID: \"5d0f456a-ead3-4fc9-8532-46d629ebb86a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.005996 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/436b3b5b-76e0-416d-8f55-de0bb312f46d-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-8rhtx\" (UID: \"436b3b5b-76e0-416d-8f55-de0bb312f46d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.007907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-proxy-tls\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.008816 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4358264-0e5b-4c15-b34b-c65740995ec0-cert\") pod \"ingress-canary-gpwlx\" (UID: \"a4358264-0e5b-4c15-b34b-c65740995ec0\") " pod="openshift-ingress-canary/ingress-canary-gpwlx" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.009432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm974\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-kube-api-access-cm974\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.013188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a3d8d6-d2cf-48ff-852f-de1f2f0de439-serving-cert\") pod \"service-ca-operator-777779d784-2m676\" (UID: \"59a3d8d6-d2cf-48ff-852f-de1f2f0de439\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.013476 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.014118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64ce0c87-2515-445e-ad80-e95bae36bfd0-srv-cert\") pod \"olm-operator-6b444d44fb-h4x6x\" (UID: \"64ce0c87-2515-445e-ad80-e95bae36bfd0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.015533 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.015816 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e04c1c07-99b1-4354-8f39-a16776c388aa-apiservice-cert\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.016005 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e04c1c07-99b1-4354-8f39-a16776c388aa-webhook-cert\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.016453 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e42451d2-d417-420f-b109-845278870cfb-srv-cert\") pod \"catalog-operator-68c6474976-4p4tt\" (UID: 
\"e42451d2-d417-420f-b109-845278870cfb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.017581 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a23229ef-e215-4e9f-a8e0-d38be72aef90-secret-volume\") pod \"collect-profiles-29400960-sxgpp\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.018086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5ed67ec-0477-4a5b-8a35-e857d183ed53-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-sp2hg\" (UID: \"e5ed67ec-0477-4a5b-8a35-e857d183ed53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.018304 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f9d4d34a-f5d6-425f-bb81-bad575d7178c-proxy-tls\") pod \"machine-config-controller-84d6567774-8g6nh\" (UID: \"f9d4d34a-f5d6-425f-bb81-bad575d7178c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.018869 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3acc0e9c-36be-4834-8450-d68aec396f24-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf8bv\" (UID: \"3acc0e9c-36be-4834-8450-d68aec396f24\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.018883 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/1242c727-9313-453b-a4a4-899623f2413d-certs\") pod \"machine-config-server-9g4f5\" (UID: \"1242c727-9313-453b-a4a4-899623f2413d\") " pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.019153 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b95137c-8f1b-4e15-8ae2-4c6192118119-metrics-certs\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.020029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1242c727-9313-453b-a4a4-899623f2413d-node-bootstrap-token\") pod \"machine-config-server-9g4f5\" (UID: \"1242c727-9313-453b-a4a4-899623f2413d\") " pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.021130 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d15298c9-07f3-469c-a03d-007cc07146e1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jsphj\" (UID: \"d15298c9-07f3-469c-a03d-007cc07146e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.021843 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/03b7a2c7-309a-4f84-8cf1-0dd3b0562544-signing-key\") pod \"service-ca-9c57cc56f-vttfl\" (UID: \"03b7a2c7-309a-4f84-8cf1-0dd3b0562544\") " pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.025311 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-km6r5\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") " pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.026139 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2d1bd43-b1f2-45bf-abfc-9e43609ee07f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tdrjl\" (UID: \"a2d1bd43-b1f2-45bf-abfc-9e43609ee07f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.033927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwgh6\" (UniqueName: \"kubernetes.io/projected/916b7590-b541-4ca9-b432-861731b7ae94-kube-api-access-bwgh6\") pod \"console-f9d7485db-s4qrl\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.042746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6ddl\" (UniqueName: \"kubernetes.io/projected/c104cbfc-a1a1-4259-99c3-a304f01dbcb1-kube-api-access-z6ddl\") pod \"authentication-operator-69f744f599-trtj2\" (UID: \"c104cbfc-a1a1-4259-99c3-a304f01dbcb1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.097552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.098027 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:38.598015571 +0000 UTC m=+152.307046366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.114049 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldp8l\" (UniqueName: \"kubernetes.io/projected/35250086-d3b8-4f83-a232-aba1a9d09bb2-kube-api-access-ldp8l\") pod \"csi-hostpathplugin-h8svr\" (UID: \"35250086-d3b8-4f83-a232-aba1a9d09bb2\") " pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.126110 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j2xn\" (UniqueName: \"kubernetes.io/projected/e5ed67ec-0477-4a5b-8a35-e857d183ed53-kube-api-access-6j2xn\") pod \"openshift-apiserver-operator-796bbdcf4f-sp2hg\" (UID: \"e5ed67ec-0477-4a5b-8a35-e857d183ed53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.127182 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq4lb\" (UniqueName: \"kubernetes.io/projected/08265d42-a708-4bf9-9e5c-a791becc2aa5-kube-api-access-mq4lb\") pod 
\"dns-operator-744455d44c-xbx5c\" (UID: \"08265d42-a708-4bf9-9e5c-a791becc2aa5\") " pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.138425 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh8bj\" (UniqueName: \"kubernetes.io/projected/64ce0c87-2515-445e-ad80-e95bae36bfd0-kube-api-access-vh8bj\") pod \"olm-operator-6b444d44fb-h4x6x\" (UID: \"64ce0c87-2515-445e-ad80-e95bae36bfd0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.145854 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.163011 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/436b3b5b-76e0-416d-8f55-de0bb312f46d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8rhtx\" (UID: \"436b3b5b-76e0-416d-8f55-de0bb312f46d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.180017 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wgpm\" (UniqueName: \"kubernetes.io/projected/03b7a2c7-309a-4f84-8cf1-0dd3b0562544-kube-api-access-6wgpm\") pod \"service-ca-9c57cc56f-vttfl\" (UID: \"03b7a2c7-309a-4f84-8cf1-0dd3b0562544\") " pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.195494 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.198420 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.199130 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:38.699109429 +0000 UTC m=+152.408140224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.205355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9f4j\" (UniqueName: \"kubernetes.io/projected/03dbd329-8b62-4fd5-8cfe-87c495680e02-kube-api-access-q9f4j\") pod \"migrator-59844c95c7-grp6l\" (UID: \"03dbd329-8b62-4fd5-8cfe-87c495680e02\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.221815 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfrdn\" (UniqueName: 
\"kubernetes.io/projected/7b95137c-8f1b-4e15-8ae2-4c6192118119-kube-api-access-hfrdn\") pod \"router-default-5444994796-jw9hf\" (UID: \"7b95137c-8f1b-4e15-8ae2-4c6192118119\") " pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.232783 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.242757 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h4kq\" (UniqueName: \"kubernetes.io/projected/a23229ef-e215-4e9f-a8e0-d38be72aef90-kube-api-access-6h4kq\") pod \"collect-profiles-29400960-sxgpp\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.251005 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h8svr" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.261061 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6w6bs"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.265595 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.267503 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898w6\" (UniqueName: \"kubernetes.io/projected/bee1b462-0d31-4a35-8fd6-5e4af0ff11f7-kube-api-access-898w6\") pod \"machine-config-operator-74547568cd-gx2mn\" (UID: \"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.268694 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.279888 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb7pj\" (UniqueName: \"kubernetes.io/projected/5916acf0-507c-45cd-84e7-7f70a0c8d0a4-kube-api-access-hb7pj\") pod \"etcd-operator-b45778765-hrhbl\" (UID: \"5916acf0-507c-45cd-84e7-7f70a0c8d0a4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.299973 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.300425 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:38.800408312 +0000 UTC m=+152.509439107 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.303959 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrr4x\" (UniqueName: \"kubernetes.io/projected/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-kube-api-access-wrr4x\") pod \"marketplace-operator-79b997595-km6r5\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") " pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.325483 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.328069 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv88g\" (UniqueName: \"kubernetes.io/projected/f9d4d34a-f5d6-425f-bb81-bad575d7178c-kube-api-access-zv88g\") pod \"machine-config-controller-84d6567774-8g6nh\" (UID: \"f9d4d34a-f5d6-425f-bb81-bad575d7178c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.329622 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" Nov 25 08:13:38 crc kubenswrapper[4760]: W1125 08:13:38.330326 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffafdad_e326_4d95_8733_e5b5b2197ad9.slice/crio-2c069b3d27ed28f7ba4aa39b9980a1e4789d870f877e943f120d10a9f20a4767 WatchSource:0}: Error finding container 2c069b3d27ed28f7ba4aa39b9980a1e4789d870f877e943f120d10a9f20a4767: Status 404 returned error can't find the container with id 2c069b3d27ed28f7ba4aa39b9980a1e4789d870f877e943f120d10a9f20a4767 Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.335186 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.359558 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.363883 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg92x\" (UniqueName: \"kubernetes.io/projected/d15298c9-07f3-469c-a03d-007cc07146e1-kube-api-access-hg92x\") pod \"multus-admission-controller-857f4d67dd-jsphj\" (UID: \"d15298c9-07f3-469c-a03d-007cc07146e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.366652 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wvpt\" (UniqueName: \"kubernetes.io/projected/e42451d2-d417-420f-b109-845278870cfb-kube-api-access-8wvpt\") pod \"catalog-operator-68c6474976-4p4tt\" (UID: \"e42451d2-d417-420f-b109-845278870cfb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.384197 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m52mh\" (UniqueName: \"kubernetes.io/projected/a4358264-0e5b-4c15-b34b-c65740995ec0-kube-api-access-m52mh\") pod \"ingress-canary-gpwlx\" (UID: \"a4358264-0e5b-4c15-b34b-c65740995ec0\") " pod="openshift-ingress-canary/ingress-canary-gpwlx" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.387824 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.393357 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.398919 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwdqd\" (UniqueName: \"kubernetes.io/projected/5d0f456a-ead3-4fc9-8532-46d629ebb86a-kube-api-access-kwdqd\") pod \"kube-storage-version-migrator-operator-b67b599dd-bpqx2\" (UID: \"5d0f456a-ead3-4fc9-8532-46d629ebb86a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.401005 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.401319 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 08:13:38.901286973 +0000 UTC m=+152.610317769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.401573 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.402004 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:38.901993404 +0000 UTC m=+152.611024199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.406165 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" Nov 25 08:13:38 crc kubenswrapper[4760]: W1125 08:13:38.409361 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b95137c_8f1b_4e15_8ae2_4c6192118119.slice/crio-9ab572bb14b097f7f75039aff4fae77e85fb7f35fb37a472e7df6e6341ceb4f0 WatchSource:0}: Error finding container 9ab572bb14b097f7f75039aff4fae77e85fb7f35fb37a472e7df6e6341ceb4f0: Status 404 returned error can't find the container with id 9ab572bb14b097f7f75039aff4fae77e85fb7f35fb37a472e7df6e6341ceb4f0 Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.410109 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" Nov 25 08:13:38 crc kubenswrapper[4760]: W1125 08:13:38.413527 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ee6c92_f652_4da5_8291_f3fedd05be84.slice/crio-cb1b9549a76c77138e545fdbd56d1903d82e80171605307d7bd8a3b21783e220 WatchSource:0}: Error finding container cb1b9549a76c77138e545fdbd56d1903d82e80171605307d7bd8a3b21783e220: Status 404 returned error can't find the container with id cb1b9549a76c77138e545fdbd56d1903d82e80171605307d7bd8a3b21783e220 Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.417100 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.424761 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.426824 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbfmt\" (UniqueName: \"kubernetes.io/projected/59a3d8d6-d2cf-48ff-852f-de1f2f0de439-kube-api-access-wbfmt\") pod \"service-ca-operator-777779d784-2m676\" (UID: \"59a3d8d6-d2cf-48ff-852f-de1f2f0de439\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.437062 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.440508 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.450789 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.459091 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rffw\" (UniqueName: \"kubernetes.io/projected/3acc0e9c-36be-4834-8450-d68aec396f24-kube-api-access-9rffw\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf8bv\" (UID: \"3acc0e9c-36be-4834-8450-d68aec396f24\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.460116 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.460182 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9dz6w"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.462973 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdpll\" (UniqueName: \"kubernetes.io/projected/1242c727-9313-453b-a4a4-899623f2413d-kube-api-access-fdpll\") pod \"machine-config-server-9g4f5\" (UID: \"1242c727-9313-453b-a4a4-899623f2413d\") " pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.473237 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.487502 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.489314 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.490174 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdd8c\" (UniqueName: \"kubernetes.io/projected/a2d1bd43-b1f2-45bf-abfc-9e43609ee07f-kube-api-access-vdd8c\") pod \"package-server-manager-789f6589d5-tdrjl\" (UID: \"a2d1bd43-b1f2-45bf-abfc-9e43609ee07f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.502343 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.502710 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.00268918 +0000 UTC m=+152.711719975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.507298 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.510831 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.515861 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lbwk\" (UniqueName: \"kubernetes.io/projected/d30f6634-bff6-4b14-a07c-752377452b53-kube-api-access-4lbwk\") pod \"dns-default-rml2b\" (UID: \"d30f6634-bff6-4b14-a07c-752377452b53\") " pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.518933 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9g4f5" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.525757 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vsqf\" (UniqueName: \"kubernetes.io/projected/e04c1c07-99b1-4354-8f39-a16776c388aa-kube-api-access-5vsqf\") pod \"packageserver-d55dfcdfc-j7rdl\" (UID: \"e04c1c07-99b1-4354-8f39-a16776c388aa\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.558625 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gpwlx" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.566751 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:38 crc kubenswrapper[4760]: W1125 08:13:38.574502 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod773a65eb_f881_42b1_a499_9dd15265f638.slice/crio-e0c7e8c18ec20fc5659c0b6062fdda9c19d945074d2ef0e0c2c6477921998cb7 WatchSource:0}: Error finding container e0c7e8c18ec20fc5659c0b6062fdda9c19d945074d2ef0e0c2c6477921998cb7: Status 404 returned error can't find the container with id e0c7e8c18ec20fc5659c0b6062fdda9c19d945074d2ef0e0c2c6477921998cb7 Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.603757 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.607404 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xbx5c"] Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.607441 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.107421334 +0000 UTC m=+152.816452129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.621756 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-46x6w"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.631968 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.632845 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pvjn5"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.659441 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" event={"ID":"8cd6819e-5f95-4734-90ef-484b3362a7c9","Type":"ContainerStarted","Data":"520b8aac8447e2dfcec90ce725e5085b59f8b2a8ab095505b7eb0bad1f6cae03"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.659512 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" event={"ID":"8cd6819e-5f95-4734-90ef-484b3362a7c9","Type":"ContainerStarted","Data":"451df1e5936bcd846f679bf745cebe45a5377e43a2d0c2c1fb7ab0579d2118b7"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.667644 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" 
event={"ID":"73ee8ff0-97e5-4ce1-aba3-110933546bab","Type":"ContainerStarted","Data":"0533f9906ff182bd024663f3ebbe9e2582209141e0b5eabfec28145d7689214e"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.682930 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jw9hf" event={"ID":"7b95137c-8f1b-4e15-8ae2-4c6192118119","Type":"ContainerStarted","Data":"9ab572bb14b097f7f75039aff4fae77e85fb7f35fb37a472e7df6e6341ceb4f0"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.685805 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-trtj2"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.687207 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.690291 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" event={"ID":"4cbc0deb-c42c-40bb-b313-44957ed5b688","Type":"ContainerStarted","Data":"4c06c4738bd641e7492aa26ee424bd234e6485f75d675359d96a8fa7c51958f9"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.690338 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" event={"ID":"4cbc0deb-c42c-40bb-b313-44957ed5b688","Type":"ContainerStarted","Data":"131945909f1ff753b8ea66444fde42cedc26cb22b358554ca744b15e928d8664"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.691492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" event={"ID":"6fce3bec-6d01-47d6-aa9e-ca61f62921c8","Type":"ContainerStarted","Data":"2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 
08:13:38.691525 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" event={"ID":"6fce3bec-6d01-47d6-aa9e-ca61f62921c8","Type":"ContainerStarted","Data":"1a71cb68f18b4aedf5744fd67fce57602e1d49824a65249552de2a32db401d39"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.692281 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.698451 4760 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-bsp8l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.698827 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" podUID="6fce3bec-6d01-47d6-aa9e-ca61f62921c8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.704610 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.704751 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 08:13:39.204727291 +0000 UTC m=+152.913758086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.704837 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.705194 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.205182574 +0000 UTC m=+152.914213369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.706295 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" event={"ID":"a8ee6c92-f652-4da5-8291-f3fedd05be84","Type":"ContainerStarted","Data":"cb1b9549a76c77138e545fdbd56d1903d82e80171605307d7bd8a3b21783e220"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.726208 4760 generic.go:334] "Generic (PLEG): container finished" podID="1556c569-2bfa-4b43-ac95-468f72dbcb94" containerID="8bd814fc9c321e51bb67a92c9c1df7d485b42bcd44e74ca65f3b643cb6eeee0d" exitCode=0 Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.726591 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" event={"ID":"1556c569-2bfa-4b43-ac95-468f72dbcb94","Type":"ContainerDied","Data":"8bd814fc9c321e51bb67a92c9c1df7d485b42bcd44e74ca65f3b643cb6eeee0d"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.726647 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" event={"ID":"1556c569-2bfa-4b43-ac95-468f72dbcb94","Type":"ContainerStarted","Data":"fe34b54bb4d6c86c1d131670d2bd1387862c228cf026bdb8cd37ecf1dd1060ee"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.733375 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.733681 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" event={"ID":"773a65eb-f881-42b1-a499-9dd15265f638","Type":"ContainerStarted","Data":"e0c7e8c18ec20fc5659c0b6062fdda9c19d945074d2ef0e0c2c6477921998cb7"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.752115 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" event={"ID":"1ffafdad-e326-4d95-8733-e5b5b2197ad9","Type":"ContainerStarted","Data":"2c069b3d27ed28f7ba4aa39b9980a1e4789d870f877e943f120d10a9f20a4767"} Nov 25 08:13:38 crc kubenswrapper[4760]: W1125 08:13:38.758476 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc104cbfc_a1a1_4259_99c3_a304f01dbcb1.slice/crio-94f9b0abdaf69fc488b0cf59bbefb4b0e6da3fd5c5fba322a5371fef08fb0b75 WatchSource:0}: Error finding container 94f9b0abdaf69fc488b0cf59bbefb4b0e6da3fd5c5fba322a5371fef08fb0b75: Status 404 returned error can't find the container with id 94f9b0abdaf69fc488b0cf59bbefb4b0e6da3fd5c5fba322a5371fef08fb0b75 Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.759219 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" event={"ID":"6081bf3c-671c-46d5-8fbf-df633064cbe7","Type":"ContainerStarted","Data":"7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.759276 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" 
event={"ID":"6081bf3c-671c-46d5-8fbf-df633064cbe7","Type":"ContainerStarted","Data":"9e39f0769d491a8afb3f18c4fcd849ccee93161d6e625cbb71fe19ecab608a1d"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.760400 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.762168 4760 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-trtpm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.762201 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" podUID="6081bf3c-671c-46d5-8fbf-df633064cbe7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.764117 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" event={"ID":"04581d94-b273-433b-a481-aa41acb8dbd4","Type":"ContainerStarted","Data":"f648fd4ccbf92ea9324b355f6c5b9484c127a6f898d1ba0dfdf4cba0b4617b7d"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.776923 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" event={"ID":"c0bfdc5b-7be8-4072-a9fc-342231fefc83","Type":"ContainerStarted","Data":"b28ddba02a15812901c898dba8c727a266df28a133149c93f4ead07917ad6cce"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.796366 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.801345 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" event={"ID":"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25","Type":"ContainerStarted","Data":"dfb05b0371ebd78f45ab7212825bdc43d5c6c95cff66eb6470d82563841115bd"} Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.806280 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.806693 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.306639112 +0000 UTC m=+153.015669907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.884868 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jsphj"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.909490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:38 crc kubenswrapper[4760]: E1125 08:13:38.911622 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.411599493 +0000 UTC m=+153.120630448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.927159 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h8svr"] Nov 25 08:13:38 crc kubenswrapper[4760]: I1125 08:13:38.952317 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x"] Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.011633 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.012076 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.512054762 +0000 UTC m=+153.221085557 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.021179 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh"] Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.026971 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg"] Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.030238 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn"] Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.034847 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-s4qrl"] Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.116041 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.616026314 +0000 UTC m=+153.325057109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.114837 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.220488 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.220862 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.72084536 +0000 UTC m=+153.429876145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.221311 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.221859 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.72184091 +0000 UTC m=+153.430871705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.235023 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vttfl"] Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.329496 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.329916 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.829892401 +0000 UTC m=+153.538923206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: W1125 08:13:39.394759 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbee1b462_0d31_4a35_8fd6_5e4af0ff11f7.slice/crio-389ce2ccd6b80644092691c37d956f4249615d28f6a93882385364dbd3e2f0a7 WatchSource:0}: Error finding container 389ce2ccd6b80644092691c37d956f4249615d28f6a93882385364dbd3e2f0a7: Status 404 returned error can't find the container with id 389ce2ccd6b80644092691c37d956f4249615d28f6a93882385364dbd3e2f0a7 Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.400952 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" podStartSLOduration=128.400920216 podStartE2EDuration="2m8.400920216s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:39.399091163 +0000 UTC m=+153.108121978" watchObservedRunningTime="2025-11-25 08:13:39.400920216 +0000 UTC m=+153.109951011" Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.447067 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.447448 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:39.947432351 +0000 UTC m=+153.656463146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.476100 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l"] Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.503453 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-km6r5"] Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.525606 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt"] Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.549660 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.549827 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.049808297 +0000 UTC m=+153.758839092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.550029 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.550381 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.050366003 +0000 UTC m=+153.759396798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.651227 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.651510 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.151493911 +0000 UTC m=+153.860524706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.753425 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.766892 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.266877728 +0000 UTC m=+153.975908523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.768078 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" podStartSLOduration=128.768061983 podStartE2EDuration="2m8.768061983s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:39.728938535 +0000 UTC m=+153.437969340" watchObservedRunningTime="2025-11-25 08:13:39.768061983 +0000 UTC m=+153.477092778" Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.817156 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" event={"ID":"8cd6819e-5f95-4734-90ef-484b3362a7c9","Type":"ContainerStarted","Data":"3228ae9649947e566909cb4b8b2add50a8bc2eb4b47c9211e046210fb9a8aa97"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.830344 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" event={"ID":"930152c4-9e5c-47e6-8c3b-46678c063e8f","Type":"ContainerStarted","Data":"8850d2e9d2639506a212ce8b783d4f621b9186d52fc669b98df43df738531e72"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.858821 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" 
event={"ID":"4cbc0deb-c42c-40bb-b313-44957ed5b688","Type":"ContainerStarted","Data":"7500999ed04b8180fa634897dcbefcbe410c0f17a5ee571491f160824ecdb1c5"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.866665 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" event={"ID":"04581d94-b273-433b-a481-aa41acb8dbd4","Type":"ContainerStarted","Data":"44822a0a40b85e6880af38e3bf907dacdca29762b6dd65aa5dfe0626854aaa81"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.867524 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.868505 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.368484401 +0000 UTC m=+154.077515196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.891132 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" event={"ID":"c2ba0eed-5fca-4f5d-9572-c44cd2ee0b25","Type":"ContainerStarted","Data":"0316a1f8a78182f2e3c13919adc87fa8214496bf6651ce93f4de94c1a8b7e989"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.892825 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9g4f5" event={"ID":"1242c727-9313-453b-a4a4-899623f2413d","Type":"ContainerStarted","Data":"35d221c44b99b0fa32f4d63b711bc91a2b9aac9a89e30756c16d8170776bc03b"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.896809 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l" event={"ID":"03dbd329-8b62-4fd5-8cfe-87c495680e02","Type":"ContainerStarted","Data":"5acbe9653240829d5bdf8fa561516121cf14760eddb85c19f5cadc1c6b5c3739"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.925478 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" event={"ID":"773a65eb-f881-42b1-a499-9dd15265f638","Type":"ContainerStarted","Data":"be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.927122 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.933472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" event={"ID":"03b7a2c7-309a-4f84-8cf1-0dd3b0562544","Type":"ContainerStarted","Data":"3ce853e7dd8f69eed673824946a69610ff0f1e1f0f2540e9e15539427dac8e46"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.935843 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-tss44 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.935916 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" podUID="773a65eb-f881-42b1-a499-9dd15265f638" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.941993 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" event={"ID":"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7","Type":"ContainerStarted","Data":"389ce2ccd6b80644092691c37d956f4249615d28f6a93882385364dbd3e2f0a7"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.943798 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pvjn5" event={"ID":"f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945","Type":"ContainerStarted","Data":"158b276e73f687a1675479090d4a46d0bf32bb3f76cedbd7c3f6ee1d122bfc45"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.948512 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" event={"ID":"c104cbfc-a1a1-4259-99c3-a304f01dbcb1","Type":"ContainerStarted","Data":"94f9b0abdaf69fc488b0cf59bbefb4b0e6da3fd5c5fba322a5371fef08fb0b75"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.960619 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" event={"ID":"c0bfdc5b-7be8-4072-a9fc-342231fefc83","Type":"ContainerStarted","Data":"62124032b8890b7e00c6bd1cd3bbce1aaf0ac92d6075ad066684a269d59325a3"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.971338 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.973069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jw9hf" event={"ID":"7b95137c-8f1b-4e15-8ae2-4c6192118119","Type":"ContainerStarted","Data":"5df346c898cac952f7d013e10bcabb3e9329f3fcff5d4b9d830673d7602c0e2d"} Nov 25 08:13:39 crc kubenswrapper[4760]: E1125 08:13:39.973857 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.473846364 +0000 UTC m=+154.182877159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.981892 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" event={"ID":"73ee8ff0-97e5-4ce1-aba3-110933546bab","Type":"ContainerStarted","Data":"833e3cfafc7d6946149683cc813b00e3650ac0217936cae8a42a727c271363a6"} Nov 25 08:13:39 crc kubenswrapper[4760]: I1125 08:13:39.987012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" event={"ID":"f9d4d34a-f5d6-425f-bb81-bad575d7178c","Type":"ContainerStarted","Data":"bc9f54b7b9ca4e779a3e58d71fc4d877c197c4068791f58e2dbe5356620e85c0"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.012157 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" event={"ID":"64ce0c87-2515-445e-ad80-e95bae36bfd0","Type":"ContainerStarted","Data":"3b5d65d17fcf9c658bcfb96b6e124b17534f25c1e260ca997d19499f303068a4"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.028482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h8svr" event={"ID":"35250086-d3b8-4f83-a232-aba1a9d09bb2","Type":"ContainerStarted","Data":"42bc8776f1e7418db4d22f5b98023c8acf8769d18595555fecbed3178371e9fd"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.035379 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-s4qrl" 
event={"ID":"916b7590-b541-4ca9-b432-861731b7ae94","Type":"ContainerStarted","Data":"5a270b6a4a5cc04ff580798c3b7503db16c8ab4f2644fdf93145bdc89c25a1df"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.041127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" event={"ID":"d15298c9-07f3-469c-a03d-007cc07146e1","Type":"ContainerStarted","Data":"7d62edcb55091627afb637f915a8df15365bacc322395c203490d6474344ecc9"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.054529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-46x6w" event={"ID":"992a574c-b7d7-467f-be0a-98be57052cb6","Type":"ContainerStarted","Data":"fc6cb2b91b89dbd9b6c6927f74604ad118853d9628babeb65518e8c1a49bc256"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.059781 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" event={"ID":"08265d42-a708-4bf9-9e5c-a791becc2aa5","Type":"ContainerStarted","Data":"6a60aa9cd86842600ca29b3cea8f3e65ddcea0b4e6db84cde4fdbc3d790e45d3"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.061854 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" event={"ID":"1ffafdad-e326-4d95-8733-e5b5b2197ad9","Type":"ContainerStarted","Data":"8be0a789fd06c53c0c89d59a67763b6e037d7860b48d7167faa80a3fecec19ff"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.064709 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" event={"ID":"e5ed67ec-0477-4a5b-8a35-e857d183ed53","Type":"ContainerStarted","Data":"4be4660b4c49fd4a4350f2519a476f359fe953eb2ae64cdd19554ee26afc75cf"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.074683 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:40 crc kubenswrapper[4760]: E1125 08:13:40.075901 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.575880709 +0000 UTC m=+154.284911504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.085129 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" event={"ID":"3191076e-9c9c-4e4a-923f-3189e4414342","Type":"ContainerStarted","Data":"36b859f37343b0b33801dd75e5322454df5f51653c1618f0b09d481f69de0b99"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.099348 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" event={"ID":"aec2d73b-e942-4f98-9b84-539bcc3e6fa8","Type":"ContainerStarted","Data":"59aa9e81f4aa1e230e96bc705f1534c00178548244a1e2908c787139a91edc68"} Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.113460 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.126794 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.177969 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:40 crc kubenswrapper[4760]: E1125 08:13:40.179232 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.679215882 +0000 UTC m=+154.388246677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.282157 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:40 crc kubenswrapper[4760]: E1125 08:13:40.287984 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.787963284 +0000 UTC m=+154.496994079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.311951 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx"] Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.331169 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp"] Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.337442 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.351455 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:40 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Nov 25 08:13:40 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:40 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.351524 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.405378 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:40 crc kubenswrapper[4760]: E1125 08:13:40.405840 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:40.905819124 +0000 UTC m=+154.614849919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.496613 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-g8jn9" podStartSLOduration=129.496594918 podStartE2EDuration="2m9.496594918s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:40.495418034 +0000 UTC m=+154.204448839" watchObservedRunningTime="2025-11-25 08:13:40.496594918 +0000 UTC m=+154.205625713" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.506161 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:40 crc kubenswrapper[4760]: E1125 08:13:40.506555 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.00653994 +0000 UTC m=+154.715570735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.527731 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hrhbl"] Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.539703 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jw9hf" podStartSLOduration=129.539683013 podStartE2EDuration="2m9.539683013s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:40.538239481 +0000 UTC m=+154.247270296" watchObservedRunningTime="2025-11-25 08:13:40.539683013 +0000 UTC m=+154.248713808" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.548302 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl"] Nov 25 
08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.556444 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv"] Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.567667 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2"] Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.570095 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl"] Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.584196 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wvtqd" podStartSLOduration=129.584170439 podStartE2EDuration="2m9.584170439s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:40.579473581 +0000 UTC m=+154.288504376" watchObservedRunningTime="2025-11-25 08:13:40.584170439 +0000 UTC m=+154.293201264" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.592735 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rml2b"] Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.607879 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:40 crc kubenswrapper[4760]: E1125 08:13:40.608159 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.108146783 +0000 UTC m=+154.817177568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.613369 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gpwlx"] Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.630616 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2m676"] Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.696341 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k6cm2" podStartSLOduration=129.69632295 podStartE2EDuration="2m9.69632295s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:40.695855786 +0000 UTC m=+154.404886581" watchObservedRunningTime="2025-11-25 08:13:40.69632295 +0000 UTC m=+154.405353745" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.712866 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:40 crc kubenswrapper[4760]: E1125 08:13:40.713083 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.213058111 +0000 UTC m=+154.922088966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.730899 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" podStartSLOduration=129.730878934 podStartE2EDuration="2m9.730878934s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:40.727515366 +0000 UTC m=+154.436546181" watchObservedRunningTime="2025-11-25 08:13:40.730878934 +0000 UTC m=+154.439909729" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.751997 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k4jd9" podStartSLOduration=129.751977744 podStartE2EDuration="2m9.751977744s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 08:13:40.75149985 +0000 UTC m=+154.460530645" watchObservedRunningTime="2025-11-25 08:13:40.751977744 +0000 UTC m=+154.461008539" Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.815173 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:40 crc kubenswrapper[4760]: E1125 08:13:40.815555 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.315538099 +0000 UTC m=+155.024568894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:40 crc kubenswrapper[4760]: I1125 08:13:40.915710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:40 crc kubenswrapper[4760]: E1125 08:13:40.916444 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.416424431 +0000 UTC m=+155.125455226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.028421 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.028767 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.528753138 +0000 UTC m=+155.237783933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.129520 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.129972 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.629951529 +0000 UTC m=+155.338982324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.150917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" event={"ID":"08265d42-a708-4bf9-9e5c-a791becc2aa5","Type":"ContainerStarted","Data":"ee148f16a16aad27689b9c696bc29be1479804f7554d2424fae71e911c5ee2e3"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.156787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" event={"ID":"a23229ef-e215-4e9f-a8e0-d38be72aef90","Type":"ContainerStarted","Data":"3c7f468f7a660ec5b7443959d1761c70bf6f211802f13c8e7ebc3fa52133118a"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.164715 4760 generic.go:334] "Generic (PLEG): container finished" podID="a8ee6c92-f652-4da5-8291-f3fedd05be84" containerID="89c1e9cd5b4732005eaf8811e1ae0be2ffdbb178d537bae479b7b7326f9669fe" exitCode=0 Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.164782 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" event={"ID":"a8ee6c92-f652-4da5-8291-f3fedd05be84","Type":"ContainerDied","Data":"89c1e9cd5b4732005eaf8811e1ae0be2ffdbb178d537bae479b7b7326f9669fe"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.198214 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-66bd7" podStartSLOduration=130.198193882 
podStartE2EDuration="2m10.198193882s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:40.781975894 +0000 UTC m=+154.491006689" watchObservedRunningTime="2025-11-25 08:13:41.198193882 +0000 UTC m=+154.907224677" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.232541 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" event={"ID":"59a3d8d6-d2cf-48ff-852f-de1f2f0de439","Type":"ContainerStarted","Data":"347f53c60684652b2987295ebff72342fa05cd5817054de7d634cdcee4621619"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.233729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.235036 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.735020613 +0000 UTC m=+155.444051408 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.255280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" event={"ID":"64ce0c87-2515-445e-ad80-e95bae36bfd0","Type":"ContainerStarted","Data":"10ff2859ba622e70323e22e30215dbb180f69797aae7796cc944722cdbdf1ff3"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.275287 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.276485 4760 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-h4x6x container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.276529 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" podUID="64ce0c87-2515-445e-ad80-e95bae36bfd0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.289504 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" 
event={"ID":"1ffafdad-e326-4d95-8733-e5b5b2197ad9","Type":"ContainerStarted","Data":"3f07862a92e8ef14a84828022c90553e518e08250f5f9e331f35a806f3e91d93"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.330471 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" event={"ID":"e04c1c07-99b1-4354-8f39-a16776c388aa","Type":"ContainerStarted","Data":"441c41aee445767c8aae9f12fdc45901830f60a9786510393835a20c0abeca07"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.334597 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.334876 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.834859283 +0000 UTC m=+155.543890078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.344771 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:41 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Nov 25 08:13:41 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:41 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.344844 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.356430 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" event={"ID":"1556c569-2bfa-4b43-ac95-468f72dbcb94","Type":"ContainerStarted","Data":"f339321ad81aa0e9a857696f4b8b56e9b3ebfd6216298f359789ca9800ac4cb6"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.356497 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.359022 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" podStartSLOduration=130.359009582 podStartE2EDuration="2m10.359009582s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.318703489 +0000 UTC m=+155.027734284" watchObservedRunningTime="2025-11-25 08:13:41.359009582 +0000 UTC m=+155.068040377" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.361310 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-6w6bs" podStartSLOduration=130.361291899 podStartE2EDuration="2m10.361291899s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.358436045 +0000 UTC m=+155.067466840" watchObservedRunningTime="2025-11-25 08:13:41.361291899 +0000 UTC m=+155.070322694" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.382837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" event={"ID":"e5ed67ec-0477-4a5b-8a35-e857d183ed53","Type":"ContainerStarted","Data":"801f64d46de271936f7bbd6d56154ad6bc83bb47a3542f0d84094e28f0761271"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.395173 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" event={"ID":"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7","Type":"ContainerStarted","Data":"1de48ef39fc421819889ce8431ee7bfa74a5e0b04b1e5f26f9d8a64b0a7b5729"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.409724 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" 
event={"ID":"c104cbfc-a1a1-4259-99c3-a304f01dbcb1","Type":"ContainerStarted","Data":"7b565ada26fbcf39d8fdcac45940e2657b5bf22f6b81c63f11c2c6591785348a"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.438399 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.439744 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:41.939722591 +0000 UTC m=+155.648753446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.441084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rml2b" event={"ID":"d30f6634-bff6-4b14-a07c-752377452b53","Type":"ContainerStarted","Data":"b5d203edf0d5035c4941db6e078f4e17bb1fac9bbbcbf587ed156158acceab67"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.472785 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-46x6w" 
event={"ID":"992a574c-b7d7-467f-be0a-98be57052cb6","Type":"ContainerStarted","Data":"7b4bde4f7e99b421485a57174c716e8a599924953de65a26174864784f53a46b"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.473304 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" podStartSLOduration=130.473291717 podStartE2EDuration="2m10.473291717s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.431892702 +0000 UTC m=+155.140923497" watchObservedRunningTime="2025-11-25 08:13:41.473291717 +0000 UTC m=+155.182322512" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.473466 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.473647 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-sp2hg" podStartSLOduration=130.473638777 podStartE2EDuration="2m10.473638777s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.473220315 +0000 UTC m=+155.182251110" watchObservedRunningTime="2025-11-25 08:13:41.473638777 +0000 UTC m=+155.182669572" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.478307 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pvjn5" event={"ID":"f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945","Type":"ContainerStarted","Data":"168121fc565ef769cd42db746634eefb0d41822e4c0794001655fed5679e8f36"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.479120 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-pvjn5" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.483988 4760 patch_prober.go:28] interesting pod/console-operator-58897d9998-46x6w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.484025 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-46x6w" podUID="992a574c-b7d7-467f-be0a-98be57052cb6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.484086 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-pvjn5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.484098 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pvjn5" podUID="f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.489087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" event={"ID":"73ee8ff0-97e5-4ce1-aba3-110933546bab","Type":"ContainerStarted","Data":"21fb34975fb55cdc12341b48a2c7b1ec103cc238b5e7b00670a45bc21100298e"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.490511 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-gpwlx" event={"ID":"a4358264-0e5b-4c15-b34b-c65740995ec0","Type":"ContainerStarted","Data":"2606372a104c7aed363e3e9a2196c9a3e1a1522c71595261291d4f97b3819047"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.491270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" event={"ID":"a2d1bd43-b1f2-45bf-abfc-9e43609ee07f","Type":"ContainerStarted","Data":"bd533dba058e2bdb22c84f6bb6e5edc137d2ae84179ffe2302f9e80d2cef5a3f"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.499934 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l" event={"ID":"03dbd329-8b62-4fd5-8cfe-87c495680e02","Type":"ContainerStarted","Data":"4bb205fc74f4e82c013de40cfc8384b29e8660d10723efacad24f1b5c51ea60c"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.517263 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" event={"ID":"5916acf0-507c-45cd-84e7-7f70a0c8d0a4","Type":"ContainerStarted","Data":"61d3db900a2acb137e65887a6e28f80f5db0ee36ccd055507f9f65e2aec5bb68"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.534893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" event={"ID":"f9d4d34a-f5d6-425f-bb81-bad575d7178c","Type":"ContainerStarted","Data":"eda4ed5c630c14c382fd152bb3aa11b1da4f08577ce8e76379ae5dce335d126d"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.537593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" event={"ID":"d15298c9-07f3-469c-a03d-007cc07146e1","Type":"ContainerStarted","Data":"b1b5281198553f8cfe0f5841594964212ddf0428940802668f5d588d607d8f8e"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.539532 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-trtj2" podStartSLOduration=130.539517711 podStartE2EDuration="2m10.539517711s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.533829324 +0000 UTC m=+155.242860119" watchObservedRunningTime="2025-11-25 08:13:41.539517711 +0000 UTC m=+155.248548506" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.540371 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.540815 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.040800219 +0000 UTC m=+155.749831014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.554567 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9g4f5" event={"ID":"1242c727-9313-453b-a4a4-899623f2413d","Type":"ContainerStarted","Data":"9f2982e3c6e2a9f65cf602d76897d0bd06e6d14e3da09525e0ab060c858c259d"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.588947 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" event={"ID":"03b7a2c7-309a-4f84-8cf1-0dd3b0562544","Type":"ContainerStarted","Data":"869495f5d0e831305b0caa8f43817125a4b854647d9d6b53632f3a2088de2ad7"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.606980 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" event={"ID":"aec2d73b-e942-4f98-9b84-539bcc3e6fa8","Type":"ContainerStarted","Data":"04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.607818 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.612387 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-km6r5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= 
Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.612648 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" podUID="aec2d73b-e942-4f98-9b84-539bcc3e6fa8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.618496 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" event={"ID":"3acc0e9c-36be-4834-8450-d68aec396f24","Type":"ContainerStarted","Data":"6a4a473a67519ad3508e33f2124555736525d643ed4113fc4e7c6a136bef5750"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.620111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" event={"ID":"5d0f456a-ead3-4fc9-8532-46d629ebb86a","Type":"ContainerStarted","Data":"a77a66ff3c00cadc1cf316d4e2db3f47095f63dbb297f07ce943d6fedae9b034"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.621607 4760 generic.go:334] "Generic (PLEG): container finished" podID="930152c4-9e5c-47e6-8c3b-46678c063e8f" containerID="578c382ea7939172f586ab7985f369c04163919645559e52f71e23570cb5eab2" exitCode=0 Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.621655 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" event={"ID":"930152c4-9e5c-47e6-8c3b-46678c063e8f","Type":"ContainerDied","Data":"578c382ea7939172f586ab7985f369c04163919645559e52f71e23570cb5eab2"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.627913 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" 
event={"ID":"3191076e-9c9c-4e4a-923f-3189e4414342","Type":"ContainerStarted","Data":"5bb7aa9decdcd0b2d1f6122648a0ca31265b67e7cc0fc557d5c06b01b75deb4a"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.644465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.651837 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.151819807 +0000 UTC m=+155.860850602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.658371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" event={"ID":"436b3b5b-76e0-416d-8f55-de0bb312f46d","Type":"ContainerStarted","Data":"2c430cbc44059d0e72616d2e4623fc730ee89d1575f814d861ddaecf4141cda6"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.681528 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xvqpn" 
podStartSLOduration=130.681509549 podStartE2EDuration="2m10.681509549s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.589835198 +0000 UTC m=+155.298865983" watchObservedRunningTime="2025-11-25 08:13:41.681509549 +0000 UTC m=+155.390540354" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.683144 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-46x6w" podStartSLOduration=130.683134437 podStartE2EDuration="2m10.683134437s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.680733646 +0000 UTC m=+155.389764441" watchObservedRunningTime="2025-11-25 08:13:41.683134437 +0000 UTC m=+155.392165232" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.701637 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" event={"ID":"e42451d2-d417-420f-b109-845278870cfb","Type":"ContainerStarted","Data":"53745a4344879e0752323b7efbb50d4e7bee5da27c7a0442a9a3d1b286f73f81"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.701694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" event={"ID":"e42451d2-d417-420f-b109-845278870cfb","Type":"ContainerStarted","Data":"61a4074c37af50c9d595f90c94582757aaec8a4795db7761420185dfd2364cf3"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.702724 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.706194 4760 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-pvjn5" podStartSLOduration=130.706177163 podStartE2EDuration="2m10.706177163s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.705616596 +0000 UTC m=+155.414647401" watchObservedRunningTime="2025-11-25 08:13:41.706177163 +0000 UTC m=+155.415207968" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.713642 4760 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4p4tt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.713695 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" podUID="e42451d2-d417-420f-b109-845278870cfb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.730371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-s4qrl" event={"ID":"916b7590-b541-4ca9-b432-861731b7ae94","Type":"ContainerStarted","Data":"4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a"} Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.741602 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" podStartSLOduration=130.741583042 podStartE2EDuration="2m10.741583042s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-25 08:13:41.741108468 +0000 UTC m=+155.450139263" watchObservedRunningTime="2025-11-25 08:13:41.741583042 +0000 UTC m=+155.450613837" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.746265 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.756006 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.780078 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.278671611 +0000 UTC m=+155.987702406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.786062 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" podStartSLOduration=130.786039037 podStartE2EDuration="2m10.786039037s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.7850994 +0000 UTC m=+155.494130195" watchObservedRunningTime="2025-11-25 08:13:41.786039037 +0000 UTC m=+155.495069832" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.850827 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.854301 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.35428602 +0000 UTC m=+156.063316815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.913488 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vttfl" podStartSLOduration=130.913448137 podStartE2EDuration="2m10.913448137s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.868024284 +0000 UTC m=+155.577055079" watchObservedRunningTime="2025-11-25 08:13:41.913448137 +0000 UTC m=+155.622478932" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.917887 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m2vzk" podStartSLOduration=130.917837326 podStartE2EDuration="2m10.917837326s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.915618681 +0000 UTC m=+155.624649476" watchObservedRunningTime="2025-11-25 08:13:41.917837326 +0000 UTC m=+155.626868121" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.974757 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9g4f5" podStartSLOduration=6.974737506 podStartE2EDuration="6.974737506s" podCreationTimestamp="2025-11-25 08:13:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.937187534 +0000 UTC m=+155.646218329" watchObservedRunningTime="2025-11-25 08:13:41.974737506 +0000 UTC m=+155.683768311" Nov 25 08:13:41 crc kubenswrapper[4760]: I1125 08:13:41.977206 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:41 crc kubenswrapper[4760]: E1125 08:13:41.977588 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.477567179 +0000 UTC m=+156.186597974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.008803 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" podStartSLOduration=131.008784946 podStartE2EDuration="2m11.008784946s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:41.998007439 +0000 UTC m=+155.707038244" watchObservedRunningTime="2025-11-25 08:13:42.008784946 +0000 UTC m=+155.717815741" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.042677 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" podStartSLOduration=131.0426578 podStartE2EDuration="2m11.0426578s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:42.038853608 +0000 UTC m=+155.747884413" watchObservedRunningTime="2025-11-25 08:13:42.0426578 +0000 UTC m=+155.751688605" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.079842 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.080206 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.580193852 +0000 UTC m=+156.289224647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.086327 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-s4qrl" podStartSLOduration=131.086309951 podStartE2EDuration="2m11.086309951s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:42.085815487 +0000 UTC m=+155.794846302" watchObservedRunningTime="2025-11-25 08:13:42.086309951 +0000 UTC m=+155.795340746" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.182845 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 
08:13:42.183285 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.683265277 +0000 UTC m=+156.392296072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.284981 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.285465 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.785445556 +0000 UTC m=+156.494476421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.341865 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:42 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Nov 25 08:13:42 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:42 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.342211 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.387033 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.387230 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 08:13:42.887198513 +0000 UTC m=+156.596229318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.387359 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.387678 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.887667717 +0000 UTC m=+156.596698702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.488744 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.488911 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.988877458 +0000 UTC m=+156.697908253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.489098 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.489502 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:42.989491336 +0000 UTC m=+156.698522131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.589782 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.589994 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.089960735 +0000 UTC m=+156.798991530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.590160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.590549 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.090539182 +0000 UTC m=+156.799569977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.628221 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.690793 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.690955 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.190929259 +0000 UTC m=+156.899960054 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.691024 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.691327 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.1913194 +0000 UTC m=+156.900350195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.735352 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" event={"ID":"bee1b462-0d31-4a35-8fd6-5e4af0ff11f7","Type":"ContainerStarted","Data":"3758ed26aad2b91bd7972eca02e5561a7802bf9a8918c81b7e7356fc4a012885"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.736659 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l" event={"ID":"03dbd329-8b62-4fd5-8cfe-87c495680e02","Type":"ContainerStarted","Data":"457af4fc95351ae1d2b455a6f8d48bea7eccb617a94eba0b1c56b2caa1cdcb92"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.737916 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" event={"ID":"59a3d8d6-d2cf-48ff-852f-de1f2f0de439","Type":"ContainerStarted","Data":"3dc5f553523f758cce52e2b0a94af3d71d831c1a44b1b7c4afcd13cd087b45b0"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.739256 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h8svr" event={"ID":"35250086-d3b8-4f83-a232-aba1a9d09bb2","Type":"ContainerStarted","Data":"34cbb44e90d0f05e1b4b72710510dc263814fb790e9b1142d3c3881ab1a724e3"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.741105 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" 
event={"ID":"930152c4-9e5c-47e6-8c3b-46678c063e8f","Type":"ContainerStarted","Data":"e43c53a0c781f3bb22fdd41771597f14694ce75153147046e29b5b37e85c15df"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.741163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" event={"ID":"930152c4-9e5c-47e6-8c3b-46678c063e8f","Type":"ContainerStarted","Data":"0a3e288b0162f5f7713686d41c9763a1003fd0ff12a3ac5c391a4a09a9e40e78"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.742579 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" event={"ID":"a2d1bd43-b1f2-45bf-abfc-9e43609ee07f","Type":"ContainerStarted","Data":"6c0f76cb3833fd9a6a3f9515d837d80a5e20c9f45a6b22637560396433bbde41"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.742620 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" event={"ID":"a2d1bd43-b1f2-45bf-abfc-9e43609ee07f","Type":"ContainerStarted","Data":"55b6840118444fd44efa8ff03de98db25ffbed59fec293a35419553750a6d412"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.742661 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.743986 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" event={"ID":"08265d42-a708-4bf9-9e5c-a791becc2aa5","Type":"ContainerStarted","Data":"8163c471c46033057df9bfb4ef7c1caf3a48f9006cb9b22a94871c71f6a27165"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.745021 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" 
event={"ID":"a23229ef-e215-4e9f-a8e0-d38be72aef90","Type":"ContainerStarted","Data":"f68fded35ba768785625cca84252cc4ec071b66b09c388860c5620b973bf2eda"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.746621 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rml2b" event={"ID":"d30f6634-bff6-4b14-a07c-752377452b53","Type":"ContainerStarted","Data":"bd09565ba03b0dea89211a49e09502a89bc5b58b5ee96d59b8ea38aa3c1c7f7d"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.746716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rml2b" event={"ID":"d30f6634-bff6-4b14-a07c-752377452b53","Type":"ContainerStarted","Data":"103f0129923ccbc965fd9bd703aaef49387703170b28c938a708c2eb55ee9a5b"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.746797 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.748601 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" event={"ID":"e04c1c07-99b1-4354-8f39-a16776c388aa","Type":"ContainerStarted","Data":"15ec6ee7c2d6ebdda56530b5bd816b2f299388e23ffbb2484042ac2e0f0c95d2"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.748810 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.750550 4760 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j7rdl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.750686 4760 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" podUID="e04c1c07-99b1-4354-8f39-a16776c388aa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.751373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" event={"ID":"5916acf0-507c-45cd-84e7-7f70a0c8d0a4","Type":"ContainerStarted","Data":"380859f12b629de3ea90a457baef38982b7013dc70a232c71458581915e0329a"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.752588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf8bv" event={"ID":"3acc0e9c-36be-4834-8450-d68aec396f24","Type":"ContainerStarted","Data":"32f2ee66f64dbdb70f8ceca9db6b08c2637a3e24cee90738e5d03adec97a6e61"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.754090 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" event={"ID":"d15298c9-07f3-469c-a03d-007cc07146e1","Type":"ContainerStarted","Data":"16c4c8bcfd54afeba4c7b3dda4049bdb7312d30baf883e5be360170dd5d3dbd2"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.755580 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" event={"ID":"f9d4d34a-f5d6-425f-bb81-bad575d7178c","Type":"ContainerStarted","Data":"70e00ebfd25b59caad2b1b39422a206bf55b696da488c22d034fa57d5276c9df"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.757596 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bpqx2" 
event={"ID":"5d0f456a-ead3-4fc9-8532-46d629ebb86a","Type":"ContainerStarted","Data":"01f56905eb19bf119b4a64bc1c429686dc2291c80fbd9353b11c0dfdd50712a9"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.759451 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" event={"ID":"a8ee6c92-f652-4da5-8291-f3fedd05be84","Type":"ContainerStarted","Data":"1c89da3eb7ffba9081d48be346e78683c52567373e862746bd32530c7dd3f657"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.760800 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gpwlx" event={"ID":"a4358264-0e5b-4c15-b34b-c65740995ec0","Type":"ContainerStarted","Data":"af78e041b0198a3d09d5dcfa8b2cc244bd30a85e1c0f787cba325fc880641fb4"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.762133 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" event={"ID":"436b3b5b-76e0-416d-8f55-de0bb312f46d","Type":"ContainerStarted","Data":"1493b52881df09828efd27419e01e4813f0884374506ed09fc9a5b4a66d4a8c3"} Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.762821 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-km6r5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.762859 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" podUID="aec2d73b-e942-4f98-9b84-539bcc3e6fa8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.762976 4760 
patch_prober.go:28] interesting pod/console-operator-58897d9998-46x6w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.763023 4760 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-h4x6x container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.763061 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" podUID="64ce0c87-2515-445e-ad80-e95bae36bfd0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.763078 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-pvjn5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.763020 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-46x6w" podUID="992a574c-b7d7-467f-be0a-98be57052cb6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.763096 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pvjn5" podUID="f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.771185 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gx2mn" podStartSLOduration=131.771165644 podStartE2EDuration="2m11.771165644s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:42.770170185 +0000 UTC m=+156.479200990" watchObservedRunningTime="2025-11-25 08:13:42.771165644 +0000 UTC m=+156.480196439" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.774900 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.776389 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.776465 4760 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9dz6w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.776609 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" podUID="930152c4-9e5c-47e6-8c3b-46678c063e8f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.792354 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.793219 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.293199501 +0000 UTC m=+157.002230296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.850890 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" podStartSLOduration=131.850871174 podStartE2EDuration="2m11.850871174s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:42.849672168 +0000 UTC m=+156.558702973" watchObservedRunningTime="2025-11-25 08:13:42.850871174 +0000 UTC m=+156.559901969" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.851328 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4p4tt" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.851710 4760 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2m676" podStartSLOduration=131.851703128 podStartE2EDuration="2m11.851703128s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:42.798048453 +0000 UTC m=+156.507079248" watchObservedRunningTime="2025-11-25 08:13:42.851703128 +0000 UTC m=+156.560733923" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.880433 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jsphj" podStartSLOduration=131.880413191 podStartE2EDuration="2m11.880413191s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:42.878302949 +0000 UTC m=+156.587333764" watchObservedRunningTime="2025-11-25 08:13:42.880413191 +0000 UTC m=+156.589443986" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.896225 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:42 crc kubenswrapper[4760]: E1125 08:13:42.901174 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.40115854 +0000 UTC m=+157.110189415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.916350 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.918268 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.919143 4760 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-68hpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.919317 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" podUID="a8ee6c92-f652-4da5-8291-f3fedd05be84" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.924625 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" podStartSLOduration=131.924607078 podStartE2EDuration="2m11.924607078s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-25 08:13:42.917664494 +0000 UTC m=+156.626695289" watchObservedRunningTime="2025-11-25 08:13:42.924607078 +0000 UTC m=+156.633637873" Nov 25 08:13:42 crc kubenswrapper[4760]: I1125 08:13:42.966984 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gpwlx" podStartSLOduration=7.966966732 podStartE2EDuration="7.966966732s" podCreationTimestamp="2025-11-25 08:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:42.965709955 +0000 UTC m=+156.674740760" watchObservedRunningTime="2025-11-25 08:13:42.966966732 +0000 UTC m=+156.675997527" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.002693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.002867 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" podStartSLOduration=132.002851405 podStartE2EDuration="2m12.002851405s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:43.001445004 +0000 UTC m=+156.710475819" watchObservedRunningTime="2025-11-25 08:13:43.002851405 +0000 UTC m=+156.711882200" Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.003028 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.502999249 +0000 UTC m=+157.212030044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.065705 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8rhtx" podStartSLOduration=132.065673529 podStartE2EDuration="2m12.065673529s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:43.065087432 +0000 UTC m=+156.774118227" watchObservedRunningTime="2025-11-25 08:13:43.065673529 +0000 UTC m=+156.774704324" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.067911 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" podStartSLOduration=132.067880384 podStartE2EDuration="2m12.067880384s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:43.029335352 +0000 UTC m=+156.738366157" watchObservedRunningTime="2025-11-25 08:13:43.067880384 +0000 UTC m=+156.776911179" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.107042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.107570 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.607556138 +0000 UTC m=+157.316586933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.127992 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xbx5c" podStartSLOduration=132.127975368 podStartE2EDuration="2m12.127975368s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:43.124057203 +0000 UTC m=+156.833087998" watchObservedRunningTime="2025-11-25 08:13:43.127975368 +0000 UTC m=+156.837006163" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.180474 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hrhbl" podStartSLOduration=132.180456608 podStartE2EDuration="2m12.180456608s" 
podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:43.1791391 +0000 UTC m=+156.888169915" watchObservedRunningTime="2025-11-25 08:13:43.180456608 +0000 UTC m=+156.889487403" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.208056 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.208389 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.708368508 +0000 UTC m=+157.417399303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.216715 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-grp6l" podStartSLOduration=132.216701192 podStartE2EDuration="2m12.216701192s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:43.216146626 +0000 UTC m=+156.925177431" watchObservedRunningTime="2025-11-25 08:13:43.216701192 +0000 UTC m=+156.925731987" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.255897 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g6nh" podStartSLOduration=132.255878512 podStartE2EDuration="2m12.255878512s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:43.253641986 +0000 UTC m=+156.962672791" watchObservedRunningTime="2025-11-25 08:13:43.255878512 +0000 UTC m=+156.964909297" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.310589 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" 
(UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.310997 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.81098113 +0000 UTC m=+157.520011925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.341989 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:43 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Nov 25 08:13:43 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:43 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.342052 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.367836 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" 
podStartSLOduration=132.367812938 podStartE2EDuration="2m12.367812938s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:43.310170366 +0000 UTC m=+157.019201161" watchObservedRunningTime="2025-11-25 08:13:43.367812938 +0000 UTC m=+157.076843743" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.371007 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rml2b" podStartSLOduration=8.37096671 podStartE2EDuration="8.37096671s" podCreationTimestamp="2025-11-25 08:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:43.362843052 +0000 UTC m=+157.071873847" watchObservedRunningTime="2025-11-25 08:13:43.37096671 +0000 UTC m=+157.079997505" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.412597 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.413315 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:43.913295293 +0000 UTC m=+157.622326088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.514749 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.515116 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.015098391 +0000 UTC m=+157.724129186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.616018 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.616190 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.116162938 +0000 UTC m=+157.825193733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.616255 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.616741 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.116726244 +0000 UTC m=+157.825757109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.717380 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.717569 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.217538814 +0000 UTC m=+157.926569609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.717730 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.718151 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.218136971 +0000 UTC m=+157.927167766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.770456 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h8svr" event={"ID":"35250086-d3b8-4f83-a232-aba1a9d09bb2","Type":"ContainerStarted","Data":"5efcc406ca4f5773fdb7b77f5bf4b332b1924cbc385b1339a9f3d2160b5d5e61"} Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.771653 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-km6r5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.771694 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" podUID="aec2d73b-e942-4f98-9b84-539bcc3e6fa8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.771658 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-pvjn5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.771923 4760 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-pvjn5" podUID="f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.818970 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.819346 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.319325951 +0000 UTC m=+158.028356746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:43 crc kubenswrapper[4760]: I1125 08:13:43.920587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:43 crc kubenswrapper[4760]: E1125 08:13:43.924179 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.424163179 +0000 UTC m=+158.133194074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.003370 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl5fp" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.022099 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.022792 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.522772373 +0000 UTC m=+158.231803158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.124159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.124538 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.62452264 +0000 UTC m=+158.333553445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.227589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.227793 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.727760859 +0000 UTC m=+158.436791654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.227881 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.228218 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.728204762 +0000 UTC m=+158.437235557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.328713 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.328984 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.82895166 +0000 UTC m=+158.537982455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.329398 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.329789 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.829775024 +0000 UTC m=+158.538805829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.341132 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:44 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Nov 25 08:13:44 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:44 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.341204 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.411847 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qz6d4"] Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.413112 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.426332 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz6d4"] Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.426801 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.430677 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.430805 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.930784079 +0000 UTC m=+158.639814874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.431407 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.431737 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:44.931723947 +0000 UTC m=+158.640754742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.532391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.532611 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-utilities\") pod \"certified-operators-qz6d4\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") " pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.532652 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-catalog-content\") pod \"certified-operators-qz6d4\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") " pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.532722 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk52j\" (UniqueName: \"kubernetes.io/projected/50b275d2-6236-4076-95b0-f2fab18a38f9-kube-api-access-gk52j\") pod \"certified-operators-qz6d4\" 
(UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") " pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.532825 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:45.032808784 +0000 UTC m=+158.741839579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.614750 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v89q8"] Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.615778 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.618651 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.634352 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk52j\" (UniqueName: \"kubernetes.io/projected/50b275d2-6236-4076-95b0-f2fab18a38f9-kube-api-access-gk52j\") pod \"certified-operators-qz6d4\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") " pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.634404 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.634447 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-utilities\") pod \"certified-operators-qz6d4\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") " pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.634489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-catalog-content\") pod \"certified-operators-qz6d4\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") " pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.634939 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-catalog-content\") pod \"certified-operators-qz6d4\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") " pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.634952 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:45.134935591 +0000 UTC m=+158.843966386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.635148 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-utilities\") pod \"certified-operators-qz6d4\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") " pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.645010 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v89q8"] Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.686972 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk52j\" (UniqueName: \"kubernetes.io/projected/50b275d2-6236-4076-95b0-f2fab18a38f9-kube-api-access-gk52j\") pod \"certified-operators-qz6d4\" (UID: 
\"50b275d2-6236-4076-95b0-f2fab18a38f9\") " pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.725056 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.735485 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.735771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-catalog-content\") pod \"community-operators-v89q8\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") " pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.735863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-utilities\") pod \"community-operators-v89q8\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") " pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.735921 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsh6\" (UniqueName: \"kubernetes.io/projected/dce383dd-3389-41fe-9223-ed5911c789fa-kube-api-access-fwsh6\") pod \"community-operators-v89q8\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") " pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.736061 
4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:45.236038479 +0000 UTC m=+158.945069274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.771662 4760 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j7rdl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.772023 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" podUID="e04c1c07-99b1-4354-8f39-a16776c388aa" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.787896 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h8svr" event={"ID":"35250086-d3b8-4f83-a232-aba1a9d09bb2","Type":"ContainerStarted","Data":"1be99414a1e17b0e051000dc8b2b7bed749b39c9bc1aee8b864ecae6d6d7e75b"} Nov 25 08:13:44 crc 
kubenswrapper[4760]: I1125 08:13:44.787938 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h8svr" event={"ID":"35250086-d3b8-4f83-a232-aba1a9d09bb2","Type":"ContainerStarted","Data":"d889d9798ab4a9277e1858fed8edf8154bac1d400f81ab8db4d61e4049cc2431"} Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.815991 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-h8svr" podStartSLOduration=9.815972196 podStartE2EDuration="9.815972196s" podCreationTimestamp="2025-11-25 08:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:44.813481253 +0000 UTC m=+158.522512058" watchObservedRunningTime="2025-11-25 08:13:44.815972196 +0000 UTC m=+158.525002991" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.821317 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2frtd"] Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.828067 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.836924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-utilities\") pod \"community-operators-v89q8\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") " pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.836984 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.837021 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwsh6\" (UniqueName: \"kubernetes.io/projected/dce383dd-3389-41fe-9223-ed5911c789fa-kube-api-access-fwsh6\") pod \"community-operators-v89q8\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") " pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.837068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-catalog-content\") pod \"community-operators-v89q8\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") " pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.837559 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-catalog-content\") pod 
\"community-operators-v89q8\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") " pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.837615 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:45.3375985 +0000 UTC m=+159.046629285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.838109 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-utilities\") pod \"community-operators-v89q8\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") " pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.848286 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2frtd"] Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.862730 4760 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.872073 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwsh6\" (UniqueName: 
\"kubernetes.io/projected/dce383dd-3389-41fe-9223-ed5911c789fa-kube-api-access-fwsh6\") pod \"community-operators-v89q8\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") " pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.930690 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.943149 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.943432 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-utilities\") pod \"certified-operators-2frtd\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.943530 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsk2s\" (UniqueName: \"kubernetes.io/projected/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-kube-api-access-jsk2s\") pod \"certified-operators-2frtd\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:44 crc kubenswrapper[4760]: I1125 08:13:44.943679 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-catalog-content\") pod \"certified-operators-2frtd\" (UID: 
\"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:44 crc kubenswrapper[4760]: E1125 08:13:44.944601 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 08:13:45.444578781 +0000 UTC m=+159.153609576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.016029 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qxnxz"] Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.017592 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.026220 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxnxz"] Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.044689 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsk2s\" (UniqueName: \"kubernetes.io/projected/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-kube-api-access-jsk2s\") pod \"certified-operators-2frtd\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.044747 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.044780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-catalog-content\") pod \"certified-operators-2frtd\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.045552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-utilities\") pod \"certified-operators-2frtd\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.046291 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-utilities\") pod \"certified-operators-2frtd\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:45 crc kubenswrapper[4760]: E1125 08:13:45.046714 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 08:13:45.546698318 +0000 UTC m=+159.255729113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fcw7b" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.047191 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-catalog-content\") pod \"certified-operators-2frtd\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.089347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsk2s\" (UniqueName: \"kubernetes.io/projected/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-kube-api-access-jsk2s\") pod \"certified-operators-2frtd\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.127301 4760 reconciler.go:161] 
"OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T08:13:44.862754329Z","Handler":null,"Name":""} Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.137480 4760 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.137527 4760 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.148173 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.148540 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-catalog-content\") pod \"community-operators-qxnxz\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.148635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-utilities\") pod \"community-operators-qxnxz\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.148723 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2848t\" (UniqueName: \"kubernetes.io/projected/24249aa7-95c1-4bc5-8197-55975e7a49eb-kube-api-access-2848t\") pod \"community-operators-qxnxz\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.149000 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.155541 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.172843 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz6d4"] Nov 25 08:13:45 crc kubenswrapper[4760]: W1125 08:13:45.181286 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50b275d2_6236_4076_95b0_f2fab18a38f9.slice/crio-eb918614b7629bcc7e253811f28ed455ad937d8714b346d5eeb115c1b6d4e656 WatchSource:0}: Error finding container eb918614b7629bcc7e253811f28ed455ad937d8714b346d5eeb115c1b6d4e656: Status 404 returned error can't find the container with id eb918614b7629bcc7e253811f28ed455ad937d8714b346d5eeb115c1b6d4e656 Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.254862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-catalog-content\") pod \"community-operators-qxnxz\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.254920 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-utilities\") pod \"community-operators-qxnxz\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.254982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 
25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.255025 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2848t\" (UniqueName: \"kubernetes.io/projected/24249aa7-95c1-4bc5-8197-55975e7a49eb-kube-api-access-2848t\") pod \"community-operators-qxnxz\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.255735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-catalog-content\") pod \"community-operators-qxnxz\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.255960 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-utilities\") pod \"community-operators-qxnxz\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.291462 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2848t\" (UniqueName: \"kubernetes.io/projected/24249aa7-95c1-4bc5-8197-55975e7a49eb-kube-api-access-2848t\") pod \"community-operators-qxnxz\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.337946 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.345990 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:45 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Nov 25 08:13:45 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:45 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.346356 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.370668 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.370734 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.404952 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fcw7b\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.427427 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v89q8"] Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.602420 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2frtd"] Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.667448 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.714962 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxnxz"] Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.793058 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxnxz" event={"ID":"24249aa7-95c1-4bc5-8197-55975e7a49eb","Type":"ContainerStarted","Data":"41ebe2ba87724e7d02a0d451592446636ad3085b97d8bbbe0b05946db82e139c"} Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.795204 4760 generic.go:334] "Generic (PLEG): container finished" podID="dce383dd-3389-41fe-9223-ed5911c789fa" containerID="b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22" exitCode=0 Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.795280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v89q8" event={"ID":"dce383dd-3389-41fe-9223-ed5911c789fa","Type":"ContainerDied","Data":"b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22"} Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.795304 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v89q8" event={"ID":"dce383dd-3389-41fe-9223-ed5911c789fa","Type":"ContainerStarted","Data":"f34671d567f6ff0d8d538539913d0e94ed9a05d154f429baff7d24e6940b4ec8"} Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.797734 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.799735 4760 generic.go:334] "Generic (PLEG): container finished" podID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerID="25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300" exitCode=0 Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.799804 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz6d4" event={"ID":"50b275d2-6236-4076-95b0-f2fab18a38f9","Type":"ContainerDied","Data":"25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300"} Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.799830 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz6d4" event={"ID":"50b275d2-6236-4076-95b0-f2fab18a38f9","Type":"ContainerStarted","Data":"eb918614b7629bcc7e253811f28ed455ad937d8714b346d5eeb115c1b6d4e656"} Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.803295 4760 generic.go:334] "Generic (PLEG): container finished" podID="a23229ef-e215-4e9f-a8e0-d38be72aef90" containerID="f68fded35ba768785625cca84252cc4ec071b66b09c388860c5620b973bf2eda" exitCode=0 Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.803334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" event={"ID":"a23229ef-e215-4e9f-a8e0-d38be72aef90","Type":"ContainerDied","Data":"f68fded35ba768785625cca84252cc4ec071b66b09c388860c5620b973bf2eda"} Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.805491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2frtd" event={"ID":"7e7d0153-ea78-4fd4-905a-0bd7fae6401f","Type":"ContainerStarted","Data":"92f68487da188224702f523c7c05ba7e69f6db64e022d5a695cac8ca08047764"} Nov 25 08:13:45 crc kubenswrapper[4760]: I1125 08:13:45.896578 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fcw7b"] Nov 25 08:13:45 crc kubenswrapper[4760]: W1125 08:13:45.909854 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod584213d2_6225_4cab_b558_22d0b9990cd8.slice/crio-af99955854d2e6f4f5fce89c965c8b5552c178bb962059cb4d5b37f5308f23f1 WatchSource:0}: Error 
finding container af99955854d2e6f4f5fce89c965c8b5552c178bb962059cb4d5b37f5308f23f1: Status 404 returned error can't find the container with id af99955854d2e6f4f5fce89c965c8b5552c178bb962059cb4d5b37f5308f23f1 Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.342818 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:46 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Nov 25 08:13:46 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:46 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.343143 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.657720 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vmt2d"] Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.658919 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.660972 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.669749 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmt2d"] Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.787446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6q5\" (UniqueName: \"kubernetes.io/projected/28856d66-d950-40a5-986c-0e3b0aa16949-kube-api-access-wm6q5\") pod \"redhat-marketplace-vmt2d\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") " pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.787559 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-catalog-content\") pod \"redhat-marketplace-vmt2d\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") " pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.787671 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-utilities\") pod \"redhat-marketplace-vmt2d\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") " pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.811804 4760 generic.go:334] "Generic (PLEG): container finished" podID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerID="cd99c9a530d8cf9d7fb8fd782cab216f788be53b7d5253abd0c9feb62f49df1f" exitCode=0 Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 
08:13:46.811882 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxnxz" event={"ID":"24249aa7-95c1-4bc5-8197-55975e7a49eb","Type":"ContainerDied","Data":"cd99c9a530d8cf9d7fb8fd782cab216f788be53b7d5253abd0c9feb62f49df1f"} Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.817320 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" event={"ID":"584213d2-6225-4cab-b558-22d0b9990cd8","Type":"ContainerStarted","Data":"b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c"} Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.817361 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" event={"ID":"584213d2-6225-4cab-b558-22d0b9990cd8","Type":"ContainerStarted","Data":"af99955854d2e6f4f5fce89c965c8b5552c178bb962059cb4d5b37f5308f23f1"} Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.817871 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.819709 4760 generic.go:334] "Generic (PLEG): container finished" podID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerID="cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164" exitCode=0 Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.820173 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2frtd" event={"ID":"7e7d0153-ea78-4fd4-905a-0bd7fae6401f","Type":"ContainerDied","Data":"cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164"} Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.865351 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" podStartSLOduration=135.865327452 podStartE2EDuration="2m15.865327452s" 
podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:46.861321465 +0000 UTC m=+160.570352290" watchObservedRunningTime="2025-11-25 08:13:46.865327452 +0000 UTC m=+160.574358247" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.894079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6q5\" (UniqueName: \"kubernetes.io/projected/28856d66-d950-40a5-986c-0e3b0aa16949-kube-api-access-wm6q5\") pod \"redhat-marketplace-vmt2d\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") " pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.894231 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-catalog-content\") pod \"redhat-marketplace-vmt2d\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") " pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.894313 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-utilities\") pod \"redhat-marketplace-vmt2d\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") " pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.894799 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-catalog-content\") pod \"redhat-marketplace-vmt2d\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") " pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.894843 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-utilities\") pod \"redhat-marketplace-vmt2d\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") " pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.930819 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.931724 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.933552 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.934272 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.934406 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6q5\" (UniqueName: \"kubernetes.io/projected/28856d66-d950-40a5-986c-0e3b0aa16949-kube-api-access-wm6q5\") pod \"redhat-marketplace-vmt2d\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") " pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.974828 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 08:13:46 crc kubenswrapper[4760]: I1125 08:13:46.975531 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.039327 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kz8md"] 
Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.041988 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.043815 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.050948 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kz8md"] Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.099263 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.099327 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnhd7\" (UniqueName: \"kubernetes.io/projected/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-kube-api-access-mnhd7\") pod \"redhat-marketplace-kz8md\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.099378 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-utilities\") pod \"redhat-marketplace-kz8md\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.099408 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.099437 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-catalog-content\") pod \"redhat-marketplace-kz8md\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.177778 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.200320 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-catalog-content\") pod \"redhat-marketplace-kz8md\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.200372 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.200414 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnhd7\" (UniqueName: \"kubernetes.io/projected/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-kube-api-access-mnhd7\") pod \"redhat-marketplace-kz8md\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " 
pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.200458 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-utilities\") pod \"redhat-marketplace-kz8md\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.200490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.200545 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.200918 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-catalog-content\") pod \"redhat-marketplace-kz8md\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.201183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-utilities\") pod \"redhat-marketplace-kz8md\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc 
kubenswrapper[4760]: I1125 08:13:47.227832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.228030 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnhd7\" (UniqueName: \"kubernetes.io/projected/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-kube-api-access-mnhd7\") pod \"redhat-marketplace-kz8md\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.270496 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.291977 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 08:13:47 crc kubenswrapper[4760]: E1125 08:13:47.292268 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23229ef-e215-4e9f-a8e0-d38be72aef90" containerName="collect-profiles" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.292280 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23229ef-e215-4e9f-a8e0-d38be72aef90" containerName="collect-profiles" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.292390 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23229ef-e215-4e9f-a8e0-d38be72aef90" containerName="collect-profiles" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.295003 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.297491 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.303914 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.304145 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.304979 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a23229ef-e215-4e9f-a8e0-d38be72aef90-config-volume\") pod \"a23229ef-e215-4e9f-a8e0-d38be72aef90\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.305085 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a23229ef-e215-4e9f-a8e0-d38be72aef90-secret-volume\") pod \"a23229ef-e215-4e9f-a8e0-d38be72aef90\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.305130 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h4kq\" (UniqueName: \"kubernetes.io/projected/a23229ef-e215-4e9f-a8e0-d38be72aef90-kube-api-access-6h4kq\") pod \"a23229ef-e215-4e9f-a8e0-d38be72aef90\" (UID: \"a23229ef-e215-4e9f-a8e0-d38be72aef90\") " Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.305886 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23229ef-e215-4e9f-a8e0-d38be72aef90-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"a23229ef-e215-4e9f-a8e0-d38be72aef90" (UID: "a23229ef-e215-4e9f-a8e0-d38be72aef90"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.315354 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23229ef-e215-4e9f-a8e0-d38be72aef90-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a23229ef-e215-4e9f-a8e0-d38be72aef90" (UID: "a23229ef-e215-4e9f-a8e0-d38be72aef90"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.331482 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23229ef-e215-4e9f-a8e0-d38be72aef90-kube-api-access-6h4kq" (OuterVolumeSpecName: "kube-api-access-6h4kq") pod "a23229ef-e215-4e9f-a8e0-d38be72aef90" (UID: "a23229ef-e215-4e9f-a8e0-d38be72aef90"). InnerVolumeSpecName "kube-api-access-6h4kq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.339108 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:47 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Nov 25 08:13:47 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:47 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.339160 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.374901 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.402892 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmt2d"] Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.406160 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50cb0846-7f5a-414c-9dc6-f00b403cad33-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50cb0846-7f5a-414c-9dc6-f00b403cad33\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.406204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50cb0846-7f5a-414c-9dc6-f00b403cad33-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"50cb0846-7f5a-414c-9dc6-f00b403cad33\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.406311 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a23229ef-e215-4e9f-a8e0-d38be72aef90-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.406327 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h4kq\" (UniqueName: \"kubernetes.io/projected/a23229ef-e215-4e9f-a8e0-d38be72aef90-kube-api-access-6h4kq\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.406339 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a23229ef-e215-4e9f-a8e0-d38be72aef90-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:47 crc kubenswrapper[4760]: W1125 08:13:47.416638 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28856d66_d950_40a5_986c_0e3b0aa16949.slice/crio-d5350fe477d99d69270bbea8921d5c3b104db39aa91300050319f68f551f3505 WatchSource:0}: Error finding container d5350fe477d99d69270bbea8921d5c3b104db39aa91300050319f68f551f3505: Status 404 returned error can't find the container with id d5350fe477d99d69270bbea8921d5c3b104db39aa91300050319f68f551f3505 Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.502735 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.509071 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50cb0846-7f5a-414c-9dc6-f00b403cad33-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50cb0846-7f5a-414c-9dc6-f00b403cad33\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.509116 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50cb0846-7f5a-414c-9dc6-f00b403cad33-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50cb0846-7f5a-414c-9dc6-f00b403cad33\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.509273 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50cb0846-7f5a-414c-9dc6-f00b403cad33-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50cb0846-7f5a-414c-9dc6-f00b403cad33\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: W1125 08:13:47.522679 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod675e12e4_33ba_4bca_a1bc_28f8a95e88df.slice/crio-93c9bea0d333e9fb496a800614608c4cfa1f827f6f36f1823f76b98125781f49 WatchSource:0}: Error finding container 93c9bea0d333e9fb496a800614608c4cfa1f827f6f36f1823f76b98125781f49: Status 404 returned error can't find the container with id 93c9bea0d333e9fb496a800614608c4cfa1f827f6f36f1823f76b98125781f49 Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.531481 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50cb0846-7f5a-414c-9dc6-f00b403cad33-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50cb0846-7f5a-414c-9dc6-f00b403cad33\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.614238 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pr5fk"] Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.616129 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.623320 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.629751 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.641325 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pr5fk"] Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.711472 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xctnm\" (UniqueName: \"kubernetes.io/projected/139fa8a2-b6c5-4624-9003-d418fdd22d55-kube-api-access-xctnm\") pod \"redhat-operators-pr5fk\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") " pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.711565 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-utilities\") pod \"redhat-operators-pr5fk\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") " pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.711596 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-catalog-content\") pod \"redhat-operators-pr5fk\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") " pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.725181 4760 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-kz8md"] Nov 25 08:13:47 crc kubenswrapper[4760]: W1125 08:13:47.767397 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41d7c9cc_bd72_44d9_93f5_cd7475b2e17c.slice/crio-ff9e090793a7760f22d88a638731813aa4c1ae31b1c0dc95b5bad663e45774f0 WatchSource:0}: Error finding container ff9e090793a7760f22d88a638731813aa4c1ae31b1c0dc95b5bad663e45774f0: Status 404 returned error can't find the container with id ff9e090793a7760f22d88a638731813aa4c1ae31b1c0dc95b5bad663e45774f0 Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.804186 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.812829 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-utilities\") pod \"redhat-operators-pr5fk\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") " pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.812896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-catalog-content\") pod \"redhat-operators-pr5fk\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") " pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.812988 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xctnm\" (UniqueName: \"kubernetes.io/projected/139fa8a2-b6c5-4624-9003-d418fdd22d55-kube-api-access-xctnm\") pod \"redhat-operators-pr5fk\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") " pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc 
kubenswrapper[4760]: I1125 08:13:47.813048 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9dz6w" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.813862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-utilities\") pod \"redhat-operators-pr5fk\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") " pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.814078 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-catalog-content\") pod \"redhat-operators-pr5fk\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") " pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.837271 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xctnm\" (UniqueName: \"kubernetes.io/projected/139fa8a2-b6c5-4624-9003-d418fdd22d55-kube-api-access-xctnm\") pod \"redhat-operators-pr5fk\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") " pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.842072 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" event={"ID":"a23229ef-e215-4e9f-a8e0-d38be72aef90","Type":"ContainerDied","Data":"3c7f468f7a660ec5b7443959d1761c70bf6f211802f13c8e7ebc3fa52133118a"} Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.842119 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c7f468f7a660ec5b7443959d1761c70bf6f211802f13c8e7ebc3fa52133118a" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.842187 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.848814 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz8md" event={"ID":"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c","Type":"ContainerStarted","Data":"ff9e090793a7760f22d88a638731813aa4c1ae31b1c0dc95b5bad663e45774f0"} Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.854342 4760 generic.go:334] "Generic (PLEG): container finished" podID="28856d66-d950-40a5-986c-0e3b0aa16949" containerID="22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030" exitCode=0 Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.854436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmt2d" event={"ID":"28856d66-d950-40a5-986c-0e3b0aa16949","Type":"ContainerDied","Data":"22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030"} Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.854466 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmt2d" event={"ID":"28856d66-d950-40a5-986c-0e3b0aa16949","Type":"ContainerStarted","Data":"d5350fe477d99d69270bbea8921d5c3b104db39aa91300050319f68f551f3505"} Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.864055 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"675e12e4-33ba-4bca-a1bc-28f8a95e88df","Type":"ContainerStarted","Data":"93c9bea0d333e9fb496a800614608c4cfa1f827f6f36f1823f76b98125781f49"} Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.945154 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.964517 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:13:47 crc kubenswrapper[4760]: I1125 08:13:47.996505 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-68hpd" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.027103 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-pvjn5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.027158 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pvjn5" podUID="f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.027577 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-pvjn5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.027603 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pvjn5" podUID="f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.048302 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9nmxs"] Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.057633 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.096633 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nmxs"] Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.129046 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h2wr\" (UniqueName: \"kubernetes.io/projected/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-kube-api-access-8h2wr\") pod \"redhat-operators-9nmxs\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.129454 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-catalog-content\") pod \"redhat-operators-9nmxs\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.129551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-utilities\") pod \"redhat-operators-9nmxs\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.209666 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-46x6w" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.230822 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-utilities\") pod \"redhat-operators-9nmxs\" (UID: 
\"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.230904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h2wr\" (UniqueName: \"kubernetes.io/projected/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-kube-api-access-8h2wr\") pod \"redhat-operators-9nmxs\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.230924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-catalog-content\") pod \"redhat-operators-9nmxs\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.231529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-catalog-content\") pod \"redhat-operators-9nmxs\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.231600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-utilities\") pod \"redhat-operators-9nmxs\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.258623 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h2wr\" (UniqueName: \"kubernetes.io/projected/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-kube-api-access-8h2wr\") pod \"redhat-operators-9nmxs\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " 
pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.266895 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.266969 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.282313 4760 patch_prober.go:28] interesting pod/console-f9d7485db-s4qrl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.282375 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-s4qrl" podUID="916b7590-b541-4ca9-b432-861731b7ae94" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.336072 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.340239 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:48 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Nov 25 08:13:48 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:48 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.340316 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" 
podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.406218 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.424426 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h4x6x" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.442088 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.543746 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.678835 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pr5fk"] Nov 25 08:13:48 crc kubenswrapper[4760]: W1125 08:13:48.702128 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod139fa8a2_b6c5_4624_9003_d418fdd22d55.slice/crio-9c2dd9ae3c78432fe9802e3a685529206ae85515e3dad135705b4891f3e7b3ea WatchSource:0}: Error finding container 9c2dd9ae3c78432fe9802e3a685529206ae85515e3dad135705b4891f3e7b3ea: Status 404 returned error can't find the container with id 9c2dd9ae3c78432fe9802e3a685529206ae85515e3dad135705b4891f3e7b3ea Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.800855 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j7rdl" Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.896890 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pr5fk" event={"ID":"139fa8a2-b6c5-4624-9003-d418fdd22d55","Type":"ContainerStarted","Data":"9c2dd9ae3c78432fe9802e3a685529206ae85515e3dad135705b4891f3e7b3ea"} Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.899152 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50cb0846-7f5a-414c-9dc6-f00b403cad33","Type":"ContainerStarted","Data":"aa383441a4f7e401ae08aa8b6c32734ff13a333e3a79a8de825ec9f148b3cfae"} Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.901062 4760 generic.go:334] "Generic (PLEG): container finished" podID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerID="4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139" exitCode=0 Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.901132 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz8md" event={"ID":"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c","Type":"ContainerDied","Data":"4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139"} Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.905848 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"675e12e4-33ba-4bca-a1bc-28f8a95e88df","Type":"ContainerStarted","Data":"f9d0c3c4946333fabbbd95d168050472618e8fefc95ee13b4d78eca2b7e92fb3"} Nov 25 08:13:48 crc kubenswrapper[4760]: I1125 08:13:48.975633 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.975606796 podStartE2EDuration="2.975606796s" podCreationTimestamp="2025-11-25 08:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:48.966647193 +0000 UTC m=+162.675678008" watchObservedRunningTime="2025-11-25 08:13:48.975606796 +0000 UTC 
m=+162.684637611" Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.092497 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nmxs"] Nov 25 08:13:49 crc kubenswrapper[4760]: W1125 08:13:49.146395 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7d632dc_5fc1_4021_a4bc_366e2a89ea52.slice/crio-5d22cd4ee7fb4019b5e5251d9cce3ffe7b42ccef9a8b7b66ce22c78a1d3d401b WatchSource:0}: Error finding container 5d22cd4ee7fb4019b5e5251d9cce3ffe7b42ccef9a8b7b66ce22c78a1d3d401b: Status 404 returned error can't find the container with id 5d22cd4ee7fb4019b5e5251d9cce3ffe7b42ccef9a8b7b66ce22c78a1d3d401b Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.355173 4760 patch_prober.go:28] interesting pod/router-default-5444994796-jw9hf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 08:13:49 crc kubenswrapper[4760]: [+]has-synced ok Nov 25 08:13:49 crc kubenswrapper[4760]: [+]process-running ok Nov 25 08:13:49 crc kubenswrapper[4760]: healthz check failed Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.355383 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jw9hf" podUID="7b95137c-8f1b-4e15-8ae2-4c6192118119" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.945051 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50cb0846-7f5a-414c-9dc6-f00b403cad33","Type":"ContainerStarted","Data":"fbb7d19eac2695b942fd53b26bd4fbdb1ddf1951f9603ecfa98a60527e3d0a74"} Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.967911 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerID="3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa" exitCode=0 Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.968173 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmxs" event={"ID":"d7d632dc-5fc1-4021-a4bc-366e2a89ea52","Type":"ContainerDied","Data":"3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa"} Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.968234 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmxs" event={"ID":"d7d632dc-5fc1-4021-a4bc-366e2a89ea52","Type":"ContainerStarted","Data":"5d22cd4ee7fb4019b5e5251d9cce3ffe7b42ccef9a8b7b66ce22c78a1d3d401b"} Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.994618 4760 generic.go:334] "Generic (PLEG): container finished" podID="675e12e4-33ba-4bca-a1bc-28f8a95e88df" containerID="f9d0c3c4946333fabbbd95d168050472618e8fefc95ee13b4d78eca2b7e92fb3" exitCode=0 Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.994739 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"675e12e4-33ba-4bca-a1bc-28f8a95e88df","Type":"ContainerDied","Data":"f9d0c3c4946333fabbbd95d168050472618e8fefc95ee13b4d78eca2b7e92fb3"} Nov 25 08:13:49 crc kubenswrapper[4760]: I1125 08:13:49.998508 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.9984917920000003 podStartE2EDuration="2.998491792s" podCreationTimestamp="2025-11-25 08:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:49.963768852 +0000 UTC m=+163.672799657" watchObservedRunningTime="2025-11-25 08:13:49.998491792 +0000 UTC m=+163.707522587" Nov 25 08:13:50 crc kubenswrapper[4760]: I1125 
08:13:50.015578 4760 generic.go:334] "Generic (PLEG): container finished" podID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerID="62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991" exitCode=0 Nov 25 08:13:50 crc kubenswrapper[4760]: I1125 08:13:50.015628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr5fk" event={"ID":"139fa8a2-b6c5-4624-9003-d418fdd22d55","Type":"ContainerDied","Data":"62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991"} Nov 25 08:13:50 crc kubenswrapper[4760]: I1125 08:13:50.343271 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:50 crc kubenswrapper[4760]: I1125 08:13:50.352792 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jw9hf" Nov 25 08:13:51 crc kubenswrapper[4760]: I1125 08:13:51.033295 4760 generic.go:334] "Generic (PLEG): container finished" podID="50cb0846-7f5a-414c-9dc6-f00b403cad33" containerID="fbb7d19eac2695b942fd53b26bd4fbdb1ddf1951f9603ecfa98a60527e3d0a74" exitCode=0 Nov 25 08:13:51 crc kubenswrapper[4760]: I1125 08:13:51.034186 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50cb0846-7f5a-414c-9dc6-f00b403cad33","Type":"ContainerDied","Data":"fbb7d19eac2695b942fd53b26bd4fbdb1ddf1951f9603ecfa98a60527e3d0a74"} Nov 25 08:13:51 crc kubenswrapper[4760]: I1125 08:13:51.505868 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:51 crc kubenswrapper[4760]: I1125 08:13:51.653567 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kube-api-access\") pod \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\" (UID: \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\") " Nov 25 08:13:51 crc kubenswrapper[4760]: I1125 08:13:51.653680 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kubelet-dir\") pod \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\" (UID: \"675e12e4-33ba-4bca-a1bc-28f8a95e88df\") " Nov 25 08:13:51 crc kubenswrapper[4760]: I1125 08:13:51.653774 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "675e12e4-33ba-4bca-a1bc-28f8a95e88df" (UID: "675e12e4-33ba-4bca-a1bc-28f8a95e88df"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:13:51 crc kubenswrapper[4760]: I1125 08:13:51.654074 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:51 crc kubenswrapper[4760]: I1125 08:13:51.661394 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "675e12e4-33ba-4bca-a1bc-28f8a95e88df" (UID: "675e12e4-33ba-4bca-a1bc-28f8a95e88df"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:13:51 crc kubenswrapper[4760]: I1125 08:13:51.764044 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/675e12e4-33ba-4bca-a1bc-28f8a95e88df-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.073281 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"675e12e4-33ba-4bca-a1bc-28f8a95e88df","Type":"ContainerDied","Data":"93c9bea0d333e9fb496a800614608c4cfa1f827f6f36f1823f76b98125781f49"} Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.073326 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93c9bea0d333e9fb496a800614608c4cfa1f827f6f36f1823f76b98125781f49" Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.073391 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.350830 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.492736 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50cb0846-7f5a-414c-9dc6-f00b403cad33-kube-api-access\") pod \"50cb0846-7f5a-414c-9dc6-f00b403cad33\" (UID: \"50cb0846-7f5a-414c-9dc6-f00b403cad33\") " Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.492817 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50cb0846-7f5a-414c-9dc6-f00b403cad33-kubelet-dir\") pod \"50cb0846-7f5a-414c-9dc6-f00b403cad33\" (UID: \"50cb0846-7f5a-414c-9dc6-f00b403cad33\") " Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.493011 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50cb0846-7f5a-414c-9dc6-f00b403cad33-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50cb0846-7f5a-414c-9dc6-f00b403cad33" (UID: "50cb0846-7f5a-414c-9dc6-f00b403cad33"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.493336 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50cb0846-7f5a-414c-9dc6-f00b403cad33-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.514785 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cb0846-7f5a-414c-9dc6-f00b403cad33-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "50cb0846-7f5a-414c-9dc6-f00b403cad33" (UID: "50cb0846-7f5a-414c-9dc6-f00b403cad33"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:13:52 crc kubenswrapper[4760]: I1125 08:13:52.595572 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50cb0846-7f5a-414c-9dc6-f00b403cad33-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:53 crc kubenswrapper[4760]: I1125 08:13:53.088779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50cb0846-7f5a-414c-9dc6-f00b403cad33","Type":"ContainerDied","Data":"aa383441a4f7e401ae08aa8b6c32734ff13a333e3a79a8de825ec9f148b3cfae"} Nov 25 08:13:53 crc kubenswrapper[4760]: I1125 08:13:53.089189 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa383441a4f7e401ae08aa8b6c32734ff13a333e3a79a8de825ec9f148b3cfae" Nov 25 08:13:53 crc kubenswrapper[4760]: I1125 08:13:53.089087 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 08:13:53 crc kubenswrapper[4760]: I1125 08:13:53.527157 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:53 crc kubenswrapper[4760]: I1125 08:13:53.538400 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/deaf3f00-2bbd-4217-9414-5a6759e72b60-metrics-certs\") pod \"network-metrics-daemon-v2qd9\" (UID: \"deaf3f00-2bbd-4217-9414-5a6759e72b60\") " pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:53 crc kubenswrapper[4760]: I1125 08:13:53.572050 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-dns/dns-default-rml2b" Nov 25 08:13:53 crc kubenswrapper[4760]: I1125 08:13:53.782068 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v2qd9" Nov 25 08:13:57 crc kubenswrapper[4760]: I1125 08:13:57.998301 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-pvjn5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 08:13:58 crc kubenswrapper[4760]: I1125 08:13:57.998420 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-pvjn5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 25 08:13:58 crc kubenswrapper[4760]: I1125 08:13:57.998736 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pvjn5" podUID="f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 08:13:58 crc kubenswrapper[4760]: I1125 08:13:57.998638 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pvjn5" podUID="f2cc81f0-c0f7-4869-b6ec-5d4f9d7c3945" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 25 08:13:58 crc kubenswrapper[4760]: I1125 08:13:58.267395 4760 patch_prober.go:28] interesting pod/console-f9d7485db-s4qrl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Nov 25 08:13:58 crc kubenswrapper[4760]: I1125 
08:13:58.267743 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-s4qrl" podUID="916b7590-b541-4ca9-b432-861731b7ae94" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Nov 25 08:14:01 crc kubenswrapper[4760]: I1125 08:14:01.746298 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:14:01 crc kubenswrapper[4760]: I1125 08:14:01.746360 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:14:04 crc kubenswrapper[4760]: I1125 08:14:04.282958 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 08:14:05 crc kubenswrapper[4760]: I1125 08:14:05.673165 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" Nov 25 08:14:08 crc kubenswrapper[4760]: I1125 08:14:08.028573 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-pvjn5" Nov 25 08:14:08 crc kubenswrapper[4760]: I1125 08:14:08.271673 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:14:08 crc kubenswrapper[4760]: I1125 08:14:08.275111 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:14:18 crc kubenswrapper[4760]: E1125 08:14:18.266159 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 08:14:18 crc kubenswrapper[4760]: E1125 08:14:18.266806 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wm6q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartP
olicy:nil,} start failed in pod redhat-marketplace-vmt2d_openshift-marketplace(28856d66-d950-40a5-986c-0e3b0aa16949): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 08:14:18 crc kubenswrapper[4760]: E1125 08:14:18.270586 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vmt2d" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" Nov 25 08:14:18 crc kubenswrapper[4760]: I1125 08:14:18.740601 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tdrjl" Nov 25 08:14:21 crc kubenswrapper[4760]: E1125 08:14:21.819037 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vmt2d" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" Nov 25 08:14:23 crc kubenswrapper[4760]: E1125 08:14:23.015112 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 08:14:23 crc kubenswrapper[4760]: E1125 08:14:23.015628 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gk52j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qz6d4_openshift-marketplace(50b275d2-6236-4076-95b0-f2fab18a38f9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 08:14:23 crc kubenswrapper[4760]: E1125 08:14:23.017461 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qz6d4" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" Nov 25 08:14:23 crc 
kubenswrapper[4760]: E1125 08:14:23.087730 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 08:14:23 crc kubenswrapper[4760]: E1125 08:14:23.087883 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsk2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-2frtd_openshift-marketplace(7e7d0153-ea78-4fd4-905a-0bd7fae6401f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 08:14:23 crc kubenswrapper[4760]: E1125 08:14:23.089117 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2frtd" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" Nov 25 08:14:23 crc kubenswrapper[4760]: E1125 08:14:23.100534 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 08:14:23 crc kubenswrapper[4760]: E1125 08:14:23.100671 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h2wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9nmxs_openshift-marketplace(d7d632dc-5fc1-4021-a4bc-366e2a89ea52): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 08:14:23 crc kubenswrapper[4760]: E1125 08:14:23.101848 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9nmxs" podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" Nov 25 08:14:24 crc 
kubenswrapper[4760]: E1125 08:14:24.622996 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9nmxs" podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" Nov 25 08:14:24 crc kubenswrapper[4760]: E1125 08:14:24.623044 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2frtd" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" Nov 25 08:14:24 crc kubenswrapper[4760]: E1125 08:14:24.623083 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qz6d4" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" Nov 25 08:14:24 crc kubenswrapper[4760]: E1125 08:14:24.703022 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 08:14:24 crc kubenswrapper[4760]: E1125 08:14:24.703416 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwsh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-v89q8_openshift-marketplace(dce383dd-3389-41fe-9223-ed5911c789fa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 08:14:24 crc kubenswrapper[4760]: E1125 08:14:24.704626 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-v89q8" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" Nov 25 08:14:24 crc 
kubenswrapper[4760]: E1125 08:14:24.731127 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 08:14:24 crc kubenswrapper[4760]: E1125 08:14:24.731353 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2848t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-qxnxz_openshift-marketplace(24249aa7-95c1-4bc5-8197-55975e7a49eb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 08:14:24 crc kubenswrapper[4760]: E1125 08:14:24.733215 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qxnxz" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" Nov 25 08:14:25 crc kubenswrapper[4760]: I1125 08:14:25.043660 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v2qd9"] Nov 25 08:14:25 crc kubenswrapper[4760]: W1125 08:14:25.081275 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddeaf3f00_2bbd_4217_9414_5a6759e72b60.slice/crio-d0324013db2c1459693b3fbd72304f0752df57a7cd33d25727c3ed30f22c49cd WatchSource:0}: Error finding container d0324013db2c1459693b3fbd72304f0752df57a7cd33d25727c3ed30f22c49cd: Status 404 returned error can't find the container with id d0324013db2c1459693b3fbd72304f0752df57a7cd33d25727c3ed30f22c49cd Nov 25 08:14:25 crc kubenswrapper[4760]: I1125 08:14:25.276124 4760 generic.go:334] "Generic (PLEG): container finished" podID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerID="7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270" exitCode=0 Nov 25 08:14:25 crc kubenswrapper[4760]: I1125 08:14:25.276668 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz8md" event={"ID":"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c","Type":"ContainerDied","Data":"7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270"} Nov 25 08:14:25 crc kubenswrapper[4760]: I1125 08:14:25.299865 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr5fk" event={"ID":"139fa8a2-b6c5-4624-9003-d418fdd22d55","Type":"ContainerStarted","Data":"2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0"} Nov 25 08:14:25 crc kubenswrapper[4760]: I1125 08:14:25.303160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" event={"ID":"deaf3f00-2bbd-4217-9414-5a6759e72b60","Type":"ContainerStarted","Data":"d0324013db2c1459693b3fbd72304f0752df57a7cd33d25727c3ed30f22c49cd"} Nov 25 08:14:25 crc kubenswrapper[4760]: E1125 08:14:25.303819 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qxnxz" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" Nov 25 08:14:25 crc kubenswrapper[4760]: E1125 08:14:25.304142 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-v89q8" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" Nov 25 08:14:26 crc kubenswrapper[4760]: I1125 08:14:26.311347 4760 generic.go:334] "Generic (PLEG): container finished" podID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerID="2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0" exitCode=0 Nov 25 08:14:26 crc kubenswrapper[4760]: I1125 08:14:26.311440 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr5fk" event={"ID":"139fa8a2-b6c5-4624-9003-d418fdd22d55","Type":"ContainerDied","Data":"2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0"} Nov 25 08:14:26 crc kubenswrapper[4760]: I1125 08:14:26.318360 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" event={"ID":"deaf3f00-2bbd-4217-9414-5a6759e72b60","Type":"ContainerStarted","Data":"a4edb8a832ce52add2ddbda9f97c34b2702f18de5a1897c921918d0e2e749cf1"} Nov 25 08:14:26 crc kubenswrapper[4760]: I1125 08:14:26.318407 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v2qd9" event={"ID":"deaf3f00-2bbd-4217-9414-5a6759e72b60","Type":"ContainerStarted","Data":"8adc13ce5c7c847b9253bfaed7729f51fa47083aec2b35e08df8dede70856114"} Nov 25 08:14:26 crc kubenswrapper[4760]: I1125 08:14:26.322359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz8md" event={"ID":"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c","Type":"ContainerStarted","Data":"d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9"} Nov 25 08:14:26 crc kubenswrapper[4760]: I1125 08:14:26.355474 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-v2qd9" podStartSLOduration=175.355434996 podStartE2EDuration="2m55.355434996s" podCreationTimestamp="2025-11-25 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:14:26.350898132 +0000 UTC m=+200.059928927" watchObservedRunningTime="2025-11-25 08:14:26.355434996 +0000 UTC m=+200.064465791" Nov 25 08:14:26 crc kubenswrapper[4760]: I1125 08:14:26.377552 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kz8md" podStartSLOduration=3.306096034 podStartE2EDuration="40.377523248s" podCreationTimestamp="2025-11-25 08:13:46 +0000 UTC" firstStartedPulling="2025-11-25 08:13:48.906496698 +0000 UTC m=+162.615527493" lastFinishedPulling="2025-11-25 08:14:25.977923912 +0000 UTC m=+199.686954707" observedRunningTime="2025-11-25 
08:14:26.37185507 +0000 UTC m=+200.080885865" watchObservedRunningTime="2025-11-25 08:14:26.377523248 +0000 UTC m=+200.086554043" Nov 25 08:14:27 crc kubenswrapper[4760]: I1125 08:14:27.333089 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr5fk" event={"ID":"139fa8a2-b6c5-4624-9003-d418fdd22d55","Type":"ContainerStarted","Data":"e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514"} Nov 25 08:14:27 crc kubenswrapper[4760]: I1125 08:14:27.353461 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pr5fk" podStartSLOduration=3.5720788089999997 podStartE2EDuration="40.353442874s" podCreationTimestamp="2025-11-25 08:13:47 +0000 UTC" firstStartedPulling="2025-11-25 08:13:50.018289543 +0000 UTC m=+163.727320348" lastFinishedPulling="2025-11-25 08:14:26.799653618 +0000 UTC m=+200.508684413" observedRunningTime="2025-11-25 08:14:27.352812445 +0000 UTC m=+201.061843240" watchObservedRunningTime="2025-11-25 08:14:27.353442874 +0000 UTC m=+201.062473669" Nov 25 08:14:27 crc kubenswrapper[4760]: I1125 08:14:27.376072 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:14:27 crc kubenswrapper[4760]: I1125 08:14:27.376129 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:14:27 crc kubenswrapper[4760]: I1125 08:14:27.965851 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:14:27 crc kubenswrapper[4760]: I1125 08:14:27.966096 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:14:28 crc kubenswrapper[4760]: I1125 08:14:28.547747 4760 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-kz8md" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerName="registry-server" probeResult="failure" output=< Nov 25 08:14:28 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 08:14:28 crc kubenswrapper[4760]: > Nov 25 08:14:29 crc kubenswrapper[4760]: I1125 08:14:29.000894 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pr5fk" podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerName="registry-server" probeResult="failure" output=< Nov 25 08:14:29 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 08:14:29 crc kubenswrapper[4760]: > Nov 25 08:14:31 crc kubenswrapper[4760]: I1125 08:14:31.745753 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:14:31 crc kubenswrapper[4760]: I1125 08:14:31.745813 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:14:35 crc kubenswrapper[4760]: I1125 08:14:35.384030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmt2d" event={"ID":"28856d66-d950-40a5-986c-0e3b0aa16949","Type":"ContainerStarted","Data":"f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d"} Nov 25 08:14:36 crc kubenswrapper[4760]: I1125 08:14:36.392074 4760 generic.go:334] "Generic (PLEG): container finished" podID="28856d66-d950-40a5-986c-0e3b0aa16949" 
containerID="f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d" exitCode=0 Nov 25 08:14:36 crc kubenswrapper[4760]: I1125 08:14:36.392151 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmt2d" event={"ID":"28856d66-d950-40a5-986c-0e3b0aa16949","Type":"ContainerDied","Data":"f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d"} Nov 25 08:14:36 crc kubenswrapper[4760]: I1125 08:14:36.443304 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bsp8l"] Nov 25 08:14:37 crc kubenswrapper[4760]: I1125 08:14:37.451687 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:14:37 crc kubenswrapper[4760]: I1125 08:14:37.491015 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:14:38 crc kubenswrapper[4760]: I1125 08:14:38.003701 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:14:38 crc kubenswrapper[4760]: I1125 08:14:38.048707 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pr5fk" Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.012364 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kz8md"] Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.407803 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmt2d" event={"ID":"28856d66-d950-40a5-986c-0e3b0aa16949","Type":"ContainerStarted","Data":"379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8"} Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.411959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-qxnxz" event={"ID":"24249aa7-95c1-4bc5-8197-55975e7a49eb","Type":"ContainerStarted","Data":"ae2144fc5d9b5177ead0aeaa08b3492547d32be15fcf60190d6d1b8c1267931d"} Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.418747 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v89q8" event={"ID":"dce383dd-3389-41fe-9223-ed5911c789fa","Type":"ContainerStarted","Data":"3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13"} Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.418979 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kz8md" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerName="registry-server" containerID="cri-o://d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9" gracePeriod=2 Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.428193 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vmt2d" podStartSLOduration=2.236514053 podStartE2EDuration="53.428172941s" podCreationTimestamp="2025-11-25 08:13:46 +0000 UTC" firstStartedPulling="2025-11-25 08:13:47.914855929 +0000 UTC m=+161.623886724" lastFinishedPulling="2025-11-25 08:14:39.106514817 +0000 UTC m=+212.815545612" observedRunningTime="2025-11-25 08:14:39.424104621 +0000 UTC m=+213.133135416" watchObservedRunningTime="2025-11-25 08:14:39.428172941 +0000 UTC m=+213.137203736" Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.806685 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.878577 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnhd7\" (UniqueName: \"kubernetes.io/projected/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-kube-api-access-mnhd7\") pod \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.878644 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-utilities\") pod \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.878668 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-catalog-content\") pod \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\" (UID: \"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c\") " Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.888066 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-kube-api-access-mnhd7" (OuterVolumeSpecName: "kube-api-access-mnhd7") pod "41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" (UID: "41d7c9cc-bd72-44d9-93f5-cd7475b2e17c"). InnerVolumeSpecName "kube-api-access-mnhd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.888239 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-utilities" (OuterVolumeSpecName: "utilities") pod "41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" (UID: "41d7c9cc-bd72-44d9-93f5-cd7475b2e17c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.903394 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" (UID: "41d7c9cc-bd72-44d9-93f5-cd7475b2e17c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.979429 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnhd7\" (UniqueName: \"kubernetes.io/projected/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-kube-api-access-mnhd7\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.979465 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:39 crc kubenswrapper[4760]: I1125 08:14:39.979477 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.426155 4760 generic.go:334] "Generic (PLEG): container finished" podID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerID="00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055" exitCode=0 Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.426360 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2frtd" event={"ID":"7e7d0153-ea78-4fd4-905a-0bd7fae6401f","Type":"ContainerDied","Data":"00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055"} Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.428756 4760 generic.go:334] "Generic (PLEG): container 
finished" podID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerID="ae2144fc5d9b5177ead0aeaa08b3492547d32be15fcf60190d6d1b8c1267931d" exitCode=0 Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.428805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxnxz" event={"ID":"24249aa7-95c1-4bc5-8197-55975e7a49eb","Type":"ContainerDied","Data":"ae2144fc5d9b5177ead0aeaa08b3492547d32be15fcf60190d6d1b8c1267931d"} Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.430592 4760 generic.go:334] "Generic (PLEG): container finished" podID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerID="9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0" exitCode=0 Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.430661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz6d4" event={"ID":"50b275d2-6236-4076-95b0-f2fab18a38f9","Type":"ContainerDied","Data":"9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0"} Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.432645 4760 generic.go:334] "Generic (PLEG): container finished" podID="dce383dd-3389-41fe-9223-ed5911c789fa" containerID="3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13" exitCode=0 Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.432702 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v89q8" event={"ID":"dce383dd-3389-41fe-9223-ed5911c789fa","Type":"ContainerDied","Data":"3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13"} Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.437465 4760 generic.go:334] "Generic (PLEG): container finished" podID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerID="d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9" exitCode=0 Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.437507 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kz8md" event={"ID":"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c","Type":"ContainerDied","Data":"d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9"} Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.437535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kz8md" event={"ID":"41d7c9cc-bd72-44d9-93f5-cd7475b2e17c","Type":"ContainerDied","Data":"ff9e090793a7760f22d88a638731813aa4c1ae31b1c0dc95b5bad663e45774f0"} Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.437555 4760 scope.go:117] "RemoveContainer" containerID="d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.437696 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kz8md" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.471104 4760 scope.go:117] "RemoveContainer" containerID="7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.485445 4760 scope.go:117] "RemoveContainer" containerID="4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.509480 4760 scope.go:117] "RemoveContainer" containerID="d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9" Nov 25 08:14:40 crc kubenswrapper[4760]: E1125 08:14:40.509932 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9\": container with ID starting with d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9 not found: ID does not exist" containerID="d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.509984 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9"} err="failed to get container status \"d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9\": rpc error: code = NotFound desc = could not find container \"d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9\": container with ID starting with d4ad7d4bb3b62d9bb25c3dceca28714dda906c79bb067d239993a3373ea519d9 not found: ID does not exist" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.510044 4760 scope.go:117] "RemoveContainer" containerID="7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270" Nov 25 08:14:40 crc kubenswrapper[4760]: E1125 08:14:40.510616 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270\": container with ID starting with 7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270 not found: ID does not exist" containerID="7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.510663 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270"} err="failed to get container status \"7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270\": rpc error: code = NotFound desc = could not find container \"7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270\": container with ID starting with 7b281d5f05f1f0362475ef9d72bdb5821cdbd8e8d0b627f453e30340381da270 not found: ID does not exist" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.510697 4760 scope.go:117] "RemoveContainer" containerID="4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139" Nov 25 08:14:40 crc kubenswrapper[4760]: E1125 08:14:40.511229 4760 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139\": container with ID starting with 4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139 not found: ID does not exist" containerID="4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.511280 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139"} err="failed to get container status \"4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139\": rpc error: code = NotFound desc = could not find container \"4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139\": container with ID starting with 4f4c586b1a84c9d9ee15d785bc70b4183a3e77781b6066e89e4027a5cc8d9139 not found: ID does not exist" Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.527864 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kz8md"] Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.533545 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kz8md"] Nov 25 08:14:40 crc kubenswrapper[4760]: I1125 08:14:40.951724 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" path="/var/lib/kubelet/pods/41d7c9cc-bd72-44d9-93f5-cd7475b2e17c/volumes" Nov 25 08:14:41 crc kubenswrapper[4760]: I1125 08:14:41.448013 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v89q8" event={"ID":"dce383dd-3389-41fe-9223-ed5911c789fa","Type":"ContainerStarted","Data":"17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e"} Nov 25 08:14:41 crc kubenswrapper[4760]: I1125 08:14:41.451142 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-2frtd" event={"ID":"7e7d0153-ea78-4fd4-905a-0bd7fae6401f","Type":"ContainerStarted","Data":"ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f"} Nov 25 08:14:41 crc kubenswrapper[4760]: I1125 08:14:41.453236 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxnxz" event={"ID":"24249aa7-95c1-4bc5-8197-55975e7a49eb","Type":"ContainerStarted","Data":"48c2c0fb25cb9fc6e57655062d6ce0e7fc865c1d7c528cd2325aa33967a513ab"} Nov 25 08:14:41 crc kubenswrapper[4760]: I1125 08:14:41.456086 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz6d4" event={"ID":"50b275d2-6236-4076-95b0-f2fab18a38f9","Type":"ContainerStarted","Data":"765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82"} Nov 25 08:14:41 crc kubenswrapper[4760]: I1125 08:14:41.457578 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmxs" event={"ID":"d7d632dc-5fc1-4021-a4bc-366e2a89ea52","Type":"ContainerStarted","Data":"fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99"} Nov 25 08:14:41 crc kubenswrapper[4760]: I1125 08:14:41.468294 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v89q8" podStartSLOduration=2.062159099 podStartE2EDuration="57.468276841s" podCreationTimestamp="2025-11-25 08:13:44 +0000 UTC" firstStartedPulling="2025-11-25 08:13:45.797499537 +0000 UTC m=+159.506530332" lastFinishedPulling="2025-11-25 08:14:41.203617279 +0000 UTC m=+214.912648074" observedRunningTime="2025-11-25 08:14:41.466374975 +0000 UTC m=+215.175405780" watchObservedRunningTime="2025-11-25 08:14:41.468276841 +0000 UTC m=+215.177307636" Nov 25 08:14:41 crc kubenswrapper[4760]: I1125 08:14:41.488413 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-qxnxz" podStartSLOduration=3.275640562 podStartE2EDuration="57.488396904s" podCreationTimestamp="2025-11-25 08:13:44 +0000 UTC" firstStartedPulling="2025-11-25 08:13:46.813189642 +0000 UTC m=+160.522220437" lastFinishedPulling="2025-11-25 08:14:41.025945984 +0000 UTC m=+214.734976779" observedRunningTime="2025-11-25 08:14:41.486831578 +0000 UTC m=+215.195862373" watchObservedRunningTime="2025-11-25 08:14:41.488396904 +0000 UTC m=+215.197427699" Nov 25 08:14:41 crc kubenswrapper[4760]: I1125 08:14:41.504361 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qz6d4" podStartSLOduration=2.271825996 podStartE2EDuration="57.504341084s" podCreationTimestamp="2025-11-25 08:13:44 +0000 UTC" firstStartedPulling="2025-11-25 08:13:45.80203429 +0000 UTC m=+159.511065085" lastFinishedPulling="2025-11-25 08:14:41.034549378 +0000 UTC m=+214.743580173" observedRunningTime="2025-11-25 08:14:41.502319805 +0000 UTC m=+215.211350630" watchObservedRunningTime="2025-11-25 08:14:41.504341084 +0000 UTC m=+215.213371879" Nov 25 08:14:41 crc kubenswrapper[4760]: I1125 08:14:41.522888 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2frtd" podStartSLOduration=3.453210601 podStartE2EDuration="57.522684876s" podCreationTimestamp="2025-11-25 08:13:44 +0000 UTC" firstStartedPulling="2025-11-25 08:13:46.820898538 +0000 UTC m=+160.529929333" lastFinishedPulling="2025-11-25 08:14:40.890372813 +0000 UTC m=+214.599403608" observedRunningTime="2025-11-25 08:14:41.520914044 +0000 UTC m=+215.229944829" watchObservedRunningTime="2025-11-25 08:14:41.522684876 +0000 UTC m=+215.231715671" Nov 25 08:14:42 crc kubenswrapper[4760]: I1125 08:14:42.464011 4760 generic.go:334] "Generic (PLEG): container finished" podID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" 
containerID="fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99" exitCode=0 Nov 25 08:14:42 crc kubenswrapper[4760]: I1125 08:14:42.464062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmxs" event={"ID":"d7d632dc-5fc1-4021-a4bc-366e2a89ea52","Type":"ContainerDied","Data":"fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99"} Nov 25 08:14:43 crc kubenswrapper[4760]: I1125 08:14:43.472117 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmxs" event={"ID":"d7d632dc-5fc1-4021-a4bc-366e2a89ea52","Type":"ContainerStarted","Data":"3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c"} Nov 25 08:14:43 crc kubenswrapper[4760]: I1125 08:14:43.497288 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9nmxs" podStartSLOduration=3.296336988 podStartE2EDuration="56.497265111s" podCreationTimestamp="2025-11-25 08:13:47 +0000 UTC" firstStartedPulling="2025-11-25 08:13:49.971883431 +0000 UTC m=+163.680914226" lastFinishedPulling="2025-11-25 08:14:43.172811554 +0000 UTC m=+216.881842349" observedRunningTime="2025-11-25 08:14:43.493310945 +0000 UTC m=+217.202341760" watchObservedRunningTime="2025-11-25 08:14:43.497265111 +0000 UTC m=+217.206295906" Nov 25 08:14:44 crc kubenswrapper[4760]: I1125 08:14:44.725919 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:14:44 crc kubenswrapper[4760]: I1125 08:14:44.725982 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:14:44 crc kubenswrapper[4760]: I1125 08:14:44.771974 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:14:44 crc kubenswrapper[4760]: I1125 08:14:44.932275 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:14:44 crc kubenswrapper[4760]: I1125 08:14:44.932333 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:14:44 crc kubenswrapper[4760]: I1125 08:14:44.975713 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:14:45 crc kubenswrapper[4760]: I1125 08:14:45.150398 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:14:45 crc kubenswrapper[4760]: I1125 08:14:45.151136 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:14:45 crc kubenswrapper[4760]: I1125 08:14:45.187123 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:14:45 crc kubenswrapper[4760]: I1125 08:14:45.339417 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:14:45 crc kubenswrapper[4760]: I1125 08:14:45.339751 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:14:45 crc kubenswrapper[4760]: I1125 08:14:45.377170 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:14:46 crc kubenswrapper[4760]: I1125 08:14:46.529232 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:14:47 crc kubenswrapper[4760]: I1125 08:14:47.044790 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:14:47 crc kubenswrapper[4760]: I1125 08:14:47.044845 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:14:47 crc kubenswrapper[4760]: I1125 08:14:47.084971 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:14:47 crc kubenswrapper[4760]: I1125 08:14:47.525733 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vmt2d" Nov 25 08:14:48 crc kubenswrapper[4760]: I1125 08:14:48.442367 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:14:48 crc kubenswrapper[4760]: I1125 08:14:48.442648 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:14:48 crc kubenswrapper[4760]: I1125 08:14:48.479120 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:14:48 crc kubenswrapper[4760]: I1125 08:14:48.531049 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:14:49 crc kubenswrapper[4760]: I1125 08:14:49.410861 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2frtd"] Nov 25 08:14:49 crc kubenswrapper[4760]: I1125 08:14:49.411989 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2frtd" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerName="registry-server" containerID="cri-o://ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f" gracePeriod=2 Nov 25 08:14:49 crc kubenswrapper[4760]: I1125 08:14:49.741935 4760 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:14:49 crc kubenswrapper[4760]: I1125 08:14:49.901745 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-catalog-content\") pod \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " Nov 25 08:14:49 crc kubenswrapper[4760]: I1125 08:14:49.901828 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsk2s\" (UniqueName: \"kubernetes.io/projected/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-kube-api-access-jsk2s\") pod \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " Nov 25 08:14:49 crc kubenswrapper[4760]: I1125 08:14:49.901866 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-utilities\") pod \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\" (UID: \"7e7d0153-ea78-4fd4-905a-0bd7fae6401f\") " Nov 25 08:14:49 crc kubenswrapper[4760]: I1125 08:14:49.902589 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-utilities" (OuterVolumeSpecName: "utilities") pod "7e7d0153-ea78-4fd4-905a-0bd7fae6401f" (UID: "7e7d0153-ea78-4fd4-905a-0bd7fae6401f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:49 crc kubenswrapper[4760]: I1125 08:14:49.906701 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-kube-api-access-jsk2s" (OuterVolumeSpecName: "kube-api-access-jsk2s") pod "7e7d0153-ea78-4fd4-905a-0bd7fae6401f" (UID: "7e7d0153-ea78-4fd4-905a-0bd7fae6401f"). 
InnerVolumeSpecName "kube-api-access-jsk2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:14:49 crc kubenswrapper[4760]: I1125 08:14:49.952659 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e7d0153-ea78-4fd4-905a-0bd7fae6401f" (UID: "7e7d0153-ea78-4fd4-905a-0bd7fae6401f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.003054 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.003096 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsk2s\" (UniqueName: \"kubernetes.io/projected/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-kube-api-access-jsk2s\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.003110 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7d0153-ea78-4fd4-905a-0bd7fae6401f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.509156 4760 generic.go:334] "Generic (PLEG): container finished" podID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerID="ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f" exitCode=0 Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.509595 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2frtd" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.509823 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2frtd" event={"ID":"7e7d0153-ea78-4fd4-905a-0bd7fae6401f","Type":"ContainerDied","Data":"ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f"} Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.510229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2frtd" event={"ID":"7e7d0153-ea78-4fd4-905a-0bd7fae6401f","Type":"ContainerDied","Data":"92f68487da188224702f523c7c05ba7e69f6db64e022d5a695cac8ca08047764"} Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.510334 4760 scope.go:117] "RemoveContainer" containerID="ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.540967 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2frtd"] Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.545695 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2frtd"] Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.545957 4760 scope.go:117] "RemoveContainer" containerID="00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.569171 4760 scope.go:117] "RemoveContainer" containerID="cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.582947 4760 scope.go:117] "RemoveContainer" containerID="ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f" Nov 25 08:14:50 crc kubenswrapper[4760]: E1125 08:14:50.583730 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f\": container with ID starting with ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f not found: ID does not exist" containerID="ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.583880 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f"} err="failed to get container status \"ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f\": rpc error: code = NotFound desc = could not find container \"ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f\": container with ID starting with ee97d3cd30e24455536757b8cc7c98108536436c10e12d5a105ea49de5df461f not found: ID does not exist" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.583990 4760 scope.go:117] "RemoveContainer" containerID="00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055" Nov 25 08:14:50 crc kubenswrapper[4760]: E1125 08:14:50.584451 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055\": container with ID starting with 00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055 not found: ID does not exist" containerID="00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.584575 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055"} err="failed to get container status \"00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055\": rpc error: code = NotFound desc = could not find container \"00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055\": container with ID 
starting with 00af67a63cac1414cd559dca1183c985a9b972ca577d707afd1f4f586903b055 not found: ID does not exist" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.584673 4760 scope.go:117] "RemoveContainer" containerID="cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164" Nov 25 08:14:50 crc kubenswrapper[4760]: E1125 08:14:50.585029 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164\": container with ID starting with cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164 not found: ID does not exist" containerID="cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.585151 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164"} err="failed to get container status \"cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164\": rpc error: code = NotFound desc = could not find container \"cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164\": container with ID starting with cad6e6c935034dae9f11b3a319f5e858ad8993c517ff93bf1f8ac0c901b7b164 not found: ID does not exist" Nov 25 08:14:50 crc kubenswrapper[4760]: I1125 08:14:50.954387 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" path="/var/lib/kubelet/pods/7e7d0153-ea78-4fd4-905a-0bd7fae6401f/volumes" Nov 25 08:14:52 crc kubenswrapper[4760]: I1125 08:14:52.413928 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nmxs"] Nov 25 08:14:52 crc kubenswrapper[4760]: I1125 08:14:52.414204 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9nmxs" 
podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerName="registry-server" containerID="cri-o://3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c" gracePeriod=2 Nov 25 08:14:52 crc kubenswrapper[4760]: I1125 08:14:52.768059 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:14:52 crc kubenswrapper[4760]: I1125 08:14:52.940620 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-catalog-content\") pod \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " Nov 25 08:14:52 crc kubenswrapper[4760]: I1125 08:14:52.940787 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h2wr\" (UniqueName: \"kubernetes.io/projected/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-kube-api-access-8h2wr\") pod \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " Nov 25 08:14:52 crc kubenswrapper[4760]: I1125 08:14:52.940879 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-utilities\") pod \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\" (UID: \"d7d632dc-5fc1-4021-a4bc-366e2a89ea52\") " Nov 25 08:14:52 crc kubenswrapper[4760]: I1125 08:14:52.941956 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-utilities" (OuterVolumeSpecName: "utilities") pod "d7d632dc-5fc1-4021-a4bc-366e2a89ea52" (UID: "d7d632dc-5fc1-4021-a4bc-366e2a89ea52"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:52 crc kubenswrapper[4760]: I1125 08:14:52.948915 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-kube-api-access-8h2wr" (OuterVolumeSpecName: "kube-api-access-8h2wr") pod "d7d632dc-5fc1-4021-a4bc-366e2a89ea52" (UID: "d7d632dc-5fc1-4021-a4bc-366e2a89ea52"). InnerVolumeSpecName "kube-api-access-8h2wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.043551 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h2wr\" (UniqueName: \"kubernetes.io/projected/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-kube-api-access-8h2wr\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.043604 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.354837 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7d632dc-5fc1-4021-a4bc-366e2a89ea52" (UID: "d7d632dc-5fc1-4021-a4bc-366e2a89ea52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.449135 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7d632dc-5fc1-4021-a4bc-366e2a89ea52-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.526916 4760 generic.go:334] "Generic (PLEG): container finished" podID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerID="3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c" exitCode=0 Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.526984 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmxs" event={"ID":"d7d632dc-5fc1-4021-a4bc-366e2a89ea52","Type":"ContainerDied","Data":"3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c"} Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.527010 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nmxs" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.527027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nmxs" event={"ID":"d7d632dc-5fc1-4021-a4bc-366e2a89ea52","Type":"ContainerDied","Data":"5d22cd4ee7fb4019b5e5251d9cce3ffe7b42ccef9a8b7b66ce22c78a1d3d401b"} Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.527055 4760 scope.go:117] "RemoveContainer" containerID="3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.550385 4760 scope.go:117] "RemoveContainer" containerID="fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.560033 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nmxs"] Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.566586 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9nmxs"] Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.580705 4760 scope.go:117] "RemoveContainer" containerID="3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.597335 4760 scope.go:117] "RemoveContainer" containerID="3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c" Nov 25 08:14:53 crc kubenswrapper[4760]: E1125 08:14:53.597995 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c\": container with ID starting with 3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c not found: ID does not exist" containerID="3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.598038 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c"} err="failed to get container status \"3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c\": rpc error: code = NotFound desc = could not find container \"3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c\": container with ID starting with 3917b30053f595103dba8663470aa687ccd025c6722688c6a0c1f25359936f4c not found: ID does not exist" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.598067 4760 scope.go:117] "RemoveContainer" containerID="fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99" Nov 25 08:14:53 crc kubenswrapper[4760]: E1125 08:14:53.598531 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99\": container with ID starting with fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99 not found: ID does not exist" containerID="fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.598577 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99"} err="failed to get container status \"fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99\": rpc error: code = NotFound desc = could not find container \"fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99\": container with ID starting with fc8bb0c325d8ec9c46deb9c8d29ab51652935857ac62f8298fccb1c7e06c9b99 not found: ID does not exist" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.598609 4760 scope.go:117] "RemoveContainer" containerID="3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa" Nov 25 08:14:53 crc kubenswrapper[4760]: E1125 
08:14:53.598936 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa\": container with ID starting with 3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa not found: ID does not exist" containerID="3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa" Nov 25 08:14:53 crc kubenswrapper[4760]: I1125 08:14:53.598960 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa"} err="failed to get container status \"3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa\": rpc error: code = NotFound desc = could not find container \"3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa\": container with ID starting with 3dcdacf0a32be7e9905fbbadea6c9b1624f7db870cb9c8291b2aef81940c27fa not found: ID does not exist" Nov 25 08:14:54 crc kubenswrapper[4760]: I1125 08:14:54.775408 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qz6d4" Nov 25 08:14:54 crc kubenswrapper[4760]: I1125 08:14:54.957941 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" path="/var/lib/kubelet/pods/d7d632dc-5fc1-4021-a4bc-366e2a89ea52/volumes" Nov 25 08:14:54 crc kubenswrapper[4760]: I1125 08:14:54.983961 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v89q8" Nov 25 08:14:55 crc kubenswrapper[4760]: I1125 08:14:55.374137 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.213481 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-qxnxz"] Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.213722 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qxnxz" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerName="registry-server" containerID="cri-o://48c2c0fb25cb9fc6e57655062d6ce0e7fc865c1d7c528cd2325aa33967a513ab" gracePeriod=2 Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.554105 4760 generic.go:334] "Generic (PLEG): container finished" podID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerID="48c2c0fb25cb9fc6e57655062d6ce0e7fc865c1d7c528cd2325aa33967a513ab" exitCode=0 Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.554162 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxnxz" event={"ID":"24249aa7-95c1-4bc5-8197-55975e7a49eb","Type":"ContainerDied","Data":"48c2c0fb25cb9fc6e57655062d6ce0e7fc865c1d7c528cd2325aa33967a513ab"} Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.554519 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxnxz" event={"ID":"24249aa7-95c1-4bc5-8197-55975e7a49eb","Type":"ContainerDied","Data":"41ebe2ba87724e7d02a0d451592446636ad3085b97d8bbbe0b05946db82e139c"} Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.554554 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41ebe2ba87724e7d02a0d451592446636ad3085b97d8bbbe0b05946db82e139c" Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.555207 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.708215 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2848t\" (UniqueName: \"kubernetes.io/projected/24249aa7-95c1-4bc5-8197-55975e7a49eb-kube-api-access-2848t\") pod \"24249aa7-95c1-4bc5-8197-55975e7a49eb\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.708427 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-utilities\") pod \"24249aa7-95c1-4bc5-8197-55975e7a49eb\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.708470 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-catalog-content\") pod \"24249aa7-95c1-4bc5-8197-55975e7a49eb\" (UID: \"24249aa7-95c1-4bc5-8197-55975e7a49eb\") " Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.709889 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-utilities" (OuterVolumeSpecName: "utilities") pod "24249aa7-95c1-4bc5-8197-55975e7a49eb" (UID: "24249aa7-95c1-4bc5-8197-55975e7a49eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.713952 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24249aa7-95c1-4bc5-8197-55975e7a49eb-kube-api-access-2848t" (OuterVolumeSpecName: "kube-api-access-2848t") pod "24249aa7-95c1-4bc5-8197-55975e7a49eb" (UID: "24249aa7-95c1-4bc5-8197-55975e7a49eb"). InnerVolumeSpecName "kube-api-access-2848t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.757985 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24249aa7-95c1-4bc5-8197-55975e7a49eb" (UID: "24249aa7-95c1-4bc5-8197-55975e7a49eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.816441 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.816749 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24249aa7-95c1-4bc5-8197-55975e7a49eb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:57 crc kubenswrapper[4760]: I1125 08:14:57.816907 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2848t\" (UniqueName: \"kubernetes.io/projected/24249aa7-95c1-4bc5-8197-55975e7a49eb-kube-api-access-2848t\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:58 crc kubenswrapper[4760]: I1125 08:14:58.558675 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qxnxz" Nov 25 08:14:58 crc kubenswrapper[4760]: I1125 08:14:58.584422 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxnxz"] Nov 25 08:14:58 crc kubenswrapper[4760]: I1125 08:14:58.597731 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qxnxz"] Nov 25 08:14:58 crc kubenswrapper[4760]: I1125 08:14:58.944803 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" path="/var/lib/kubelet/pods/24249aa7-95c1-4bc5-8197-55975e7a49eb/volumes" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.132564 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w"] Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.132869 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.132892 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.132912 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675e12e4-33ba-4bca-a1bc-28f8a95e88df" containerName="pruner" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.132923 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="675e12e4-33ba-4bca-a1bc-28f8a95e88df" containerName="pruner" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.132939 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.132950 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.132970 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.132980 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.132996 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cb0846-7f5a-414c-9dc6-f00b403cad33" containerName="pruner" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133006 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cb0846-7f5a-414c-9dc6-f00b403cad33" containerName="pruner" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.133023 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133035 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.133051 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133061 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.133074 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133083 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.133095 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133105 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.133119 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133129 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.133144 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133156 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.133175 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133186 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.133198 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133209 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4760]: E1125 08:15:00.133225 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133236 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133411 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="50cb0846-7f5a-414c-9dc6-f00b403cad33" containerName="pruner" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133429 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="24249aa7-95c1-4bc5-8197-55975e7a49eb" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133449 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7d0153-ea78-4fd4-905a-0bd7fae6401f" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133461 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d632dc-5fc1-4021-a4bc-366e2a89ea52" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133474 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="675e12e4-33ba-4bca-a1bc-28f8a95e88df" containerName="pruner" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.133485 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d7c9cc-bd72-44d9-93f5-cd7475b2e17c" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.134029 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.137884 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.139529 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.141719 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w"] Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.143685 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6bxz\" (UniqueName: \"kubernetes.io/projected/e8c22b69-2113-4060-9ec8-fea251da8846-kube-api-access-j6bxz\") pod \"collect-profiles-29400975-llf4w\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.143721 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8c22b69-2113-4060-9ec8-fea251da8846-config-volume\") pod \"collect-profiles-29400975-llf4w\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.143775 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8c22b69-2113-4060-9ec8-fea251da8846-secret-volume\") pod \"collect-profiles-29400975-llf4w\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.245057 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8c22b69-2113-4060-9ec8-fea251da8846-secret-volume\") pod \"collect-profiles-29400975-llf4w\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.246374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6bxz\" (UniqueName: \"kubernetes.io/projected/e8c22b69-2113-4060-9ec8-fea251da8846-kube-api-access-j6bxz\") pod \"collect-profiles-29400975-llf4w\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.246467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8c22b69-2113-4060-9ec8-fea251da8846-config-volume\") pod \"collect-profiles-29400975-llf4w\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.247334 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8c22b69-2113-4060-9ec8-fea251da8846-config-volume\") pod \"collect-profiles-29400975-llf4w\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.249393 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e8c22b69-2113-4060-9ec8-fea251da8846-secret-volume\") pod \"collect-profiles-29400975-llf4w\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.263530 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6bxz\" (UniqueName: \"kubernetes.io/projected/e8c22b69-2113-4060-9ec8-fea251da8846-kube-api-access-j6bxz\") pod \"collect-profiles-29400975-llf4w\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.455574 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:00 crc kubenswrapper[4760]: I1125 08:15:00.639002 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w"] Nov 25 08:15:00 crc kubenswrapper[4760]: W1125 08:15:00.648072 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c22b69_2113_4060_9ec8_fea251da8846.slice/crio-ac7d51c30c15529838b1e652ba1fe4082823bcafccab131cae64735549d387da WatchSource:0}: Error finding container ac7d51c30c15529838b1e652ba1fe4082823bcafccab131cae64735549d387da: Status 404 returned error can't find the container with id ac7d51c30c15529838b1e652ba1fe4082823bcafccab131cae64735549d387da Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.475023 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" podUID="6fce3bec-6d01-47d6-aa9e-ca61f62921c8" containerName="oauth-openshift" containerID="cri-o://2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1" gracePeriod=15 
Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.573685 4760 generic.go:334] "Generic (PLEG): container finished" podID="e8c22b69-2113-4060-9ec8-fea251da8846" containerID="37de50c433c0633c27cda1eb5db0916787750c3256b9dddf18829eacf751ef26" exitCode=0 Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.573781 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" event={"ID":"e8c22b69-2113-4060-9ec8-fea251da8846","Type":"ContainerDied","Data":"37de50c433c0633c27cda1eb5db0916787750c3256b9dddf18829eacf751ef26"} Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.573835 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" event={"ID":"e8c22b69-2113-4060-9ec8-fea251da8846","Type":"ContainerStarted","Data":"ac7d51c30c15529838b1e652ba1fe4082823bcafccab131cae64735549d387da"} Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.746376 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.746474 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.746537 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.747394 4760 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.747467 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284" gracePeriod=600 Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.876369 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.978820 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-login\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979250 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-cliconfig\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979351 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-serving-cert\") pod 
\"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979388 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-router-certs\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979415 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-session\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979453 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-trusted-ca-bundle\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp6vm\" (UniqueName: \"kubernetes.io/projected/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-kube-api-access-wp6vm\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979521 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-service-ca\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") 
" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979545 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-policies\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-provider-selection\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979632 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-error\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979660 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-idp-0-file-data\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-dir\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.979712 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-ocp-branding-template\") pod \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\" (UID: \"6fce3bec-6d01-47d6-aa9e-ca61f62921c8\") " Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.980169 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.981131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.981603 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.981613 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.981833 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.988341 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-kube-api-access-wp6vm" (OuterVolumeSpecName: "kube-api-access-wp6vm") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "kube-api-access-wp6vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.988411 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.988823 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.989245 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.989528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.989742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.989875 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.990208 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:01 crc kubenswrapper[4760]: I1125 08:15:01.990311 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6fce3bec-6d01-47d6-aa9e-ca61f62921c8" (UID: "6fce3bec-6d01-47d6-aa9e-ca61f62921c8"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081366 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081418 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081433 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081444 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081455 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081467 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081479 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081493 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081506 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081519 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081529 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081542 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081551 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.081561 4760 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp6vm\" (UniqueName: \"kubernetes.io/projected/6fce3bec-6d01-47d6-aa9e-ca61f62921c8-kube-api-access-wp6vm\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.126904 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl"] Nov 25 08:15:02 crc kubenswrapper[4760]: E1125 08:15:02.127107 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fce3bec-6d01-47d6-aa9e-ca61f62921c8" containerName="oauth-openshift" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.127117 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fce3bec-6d01-47d6-aa9e-ca61f62921c8" containerName="oauth-openshift" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.127209 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fce3bec-6d01-47d6-aa9e-ca61f62921c8" containerName="oauth-openshift" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.127609 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.143182 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl"] Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.283823 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-session\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.283901 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.283926 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284029 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284085 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd22v\" (UniqueName: \"kubernetes.io/projected/6bf81e37-9e62-4d92-9016-8e44bc396b82-kube-api-access-xd22v\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284110 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bf81e37-9e62-4d92-9016-8e44bc396b82-audit-dir\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-template-error\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284172 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-audit-policies\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284313 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284387 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284405 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284522 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-template-login\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.284552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385204 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-template-login\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385293 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385343 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-session\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: 
\"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385373 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385401 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385431 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385460 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd22v\" (UniqueName: \"kubernetes.io/projected/6bf81e37-9e62-4d92-9016-8e44bc396b82-kube-api-access-xd22v\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385485 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bf81e37-9e62-4d92-9016-8e44bc396b82-audit-dir\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-audit-policies\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385641 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-template-error\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385701 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " 
pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385727 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385749 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.385767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bf81e37-9e62-4d92-9016-8e44bc396b82-audit-dir\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.386551 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.386552 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-audit-policies\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.386836 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.387106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.389998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.390060 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-template-login\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 
08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.390761 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-template-error\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.391028 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.391212 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-session\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.391236 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.391786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.402407 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bf81e37-9e62-4d92-9016-8e44bc396b82-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.403259 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd22v\" (UniqueName: \"kubernetes.io/projected/6bf81e37-9e62-4d92-9016-8e44bc396b82-kube-api-access-xd22v\") pod \"oauth-openshift-6bbf4c9fdf-s2qsl\" (UID: \"6bf81e37-9e62-4d92-9016-8e44bc396b82\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.445902 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.596679 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284" exitCode=0 Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.596792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284"} Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.596859 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"308caa4dc448cb9739c473fcfee251cdc29a87eaebc0beb1e3567269bf4c7aa2"} Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.599025 4760 generic.go:334] "Generic (PLEG): container finished" podID="6fce3bec-6d01-47d6-aa9e-ca61f62921c8" containerID="2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1" exitCode=0 Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.599214 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.600116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" event={"ID":"6fce3bec-6d01-47d6-aa9e-ca61f62921c8","Type":"ContainerDied","Data":"2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1"} Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.600194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bsp8l" event={"ID":"6fce3bec-6d01-47d6-aa9e-ca61f62921c8","Type":"ContainerDied","Data":"1a71cb68f18b4aedf5744fd67fce57602e1d49824a65249552de2a32db401d39"} Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.600231 4760 scope.go:117] "RemoveContainer" containerID="2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.641655 4760 scope.go:117] "RemoveContainer" containerID="2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1" Nov 25 08:15:02 crc kubenswrapper[4760]: E1125 08:15:02.643048 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1\": container with ID starting with 2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1 not found: ID does not exist" containerID="2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.643079 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1"} err="failed to get container status \"2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1\": rpc error: code = NotFound desc = could not find container 
\"2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1\": container with ID starting with 2bc0c279aa4c88ccafe8abf424a294d1fd2aebf792a6e76641de9fd8c7233cc1 not found: ID does not exist" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.648152 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bsp8l"] Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.651099 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bsp8l"] Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.658153 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl"] Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.835162 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.944092 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fce3bec-6d01-47d6-aa9e-ca61f62921c8" path="/var/lib/kubelet/pods/6fce3bec-6d01-47d6-aa9e-ca61f62921c8/volumes" Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.992641 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8c22b69-2113-4060-9ec8-fea251da8846-secret-volume\") pod \"e8c22b69-2113-4060-9ec8-fea251da8846\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.992711 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6bxz\" (UniqueName: \"kubernetes.io/projected/e8c22b69-2113-4060-9ec8-fea251da8846-kube-api-access-j6bxz\") pod \"e8c22b69-2113-4060-9ec8-fea251da8846\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.992773 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8c22b69-2113-4060-9ec8-fea251da8846-config-volume\") pod \"e8c22b69-2113-4060-9ec8-fea251da8846\" (UID: \"e8c22b69-2113-4060-9ec8-fea251da8846\") " Nov 25 08:15:02 crc kubenswrapper[4760]: I1125 08:15:02.993589 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c22b69-2113-4060-9ec8-fea251da8846-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8c22b69-2113-4060-9ec8-fea251da8846" (UID: "e8c22b69-2113-4060-9ec8-fea251da8846"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.002956 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c22b69-2113-4060-9ec8-fea251da8846-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8c22b69-2113-4060-9ec8-fea251da8846" (UID: "e8c22b69-2113-4060-9ec8-fea251da8846"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.003035 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c22b69-2113-4060-9ec8-fea251da8846-kube-api-access-j6bxz" (OuterVolumeSpecName: "kube-api-access-j6bxz") pod "e8c22b69-2113-4060-9ec8-fea251da8846" (UID: "e8c22b69-2113-4060-9ec8-fea251da8846"). InnerVolumeSpecName "kube-api-access-j6bxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.094052 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8c22b69-2113-4060-9ec8-fea251da8846-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.094105 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8c22b69-2113-4060-9ec8-fea251da8846-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.094120 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6bxz\" (UniqueName: \"kubernetes.io/projected/e8c22b69-2113-4060-9ec8-fea251da8846-kube-api-access-j6bxz\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.606597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" event={"ID":"6bf81e37-9e62-4d92-9016-8e44bc396b82","Type":"ContainerStarted","Data":"ca3331002ab11f8421fadc104cbd58130d4f3e79b64feb98c6e7e792f2a5f649"} Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.607032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" event={"ID":"6bf81e37-9e62-4d92-9016-8e44bc396b82","Type":"ContainerStarted","Data":"87a18562e93f9b1085c21a0800192261b9a013fb1a084826853b1e9200e0d6ce"} Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.607055 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.608476 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w" 
event={"ID":"e8c22b69-2113-4060-9ec8-fea251da8846","Type":"ContainerDied","Data":"ac7d51c30c15529838b1e652ba1fe4082823bcafccab131cae64735549d387da"}
Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.608512 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac7d51c30c15529838b1e652ba1fe4082823bcafccab131cae64735549d387da"
Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.608521 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w"
Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.614846 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl"
Nov 25 08:15:03 crc kubenswrapper[4760]: I1125 08:15:03.628112 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-s2qsl" podStartSLOduration=27.628097434 podStartE2EDuration="27.628097434s" podCreationTimestamp="2025-11-25 08:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:15:03.626737954 +0000 UTC m=+237.335768749" watchObservedRunningTime="2025-11-25 08:15:03.628097434 +0000 UTC m=+237.337128219"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.600651 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz6d4"]
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.601482 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qz6d4" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerName="registry-server" containerID="cri-o://765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82" gracePeriod=30
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.613938 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v89q8"]
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.615403 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v89q8" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" containerName="registry-server" containerID="cri-o://17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e" gracePeriod=30
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.627389 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-km6r5"]
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.627681 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" podUID="aec2d73b-e942-4f98-9b84-539bcc3e6fa8" containerName="marketplace-operator" containerID="cri-o://04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02" gracePeriod=30
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.638969 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmt2d"]
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.639344 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vmt2d" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" containerName="registry-server" containerID="cri-o://379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8" gracePeriod=30
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.662071 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pr5fk"]
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.662399 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pr5fk" podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerName="registry-server" containerID="cri-o://e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514" gracePeriod=30
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.666931 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8s28s"]
Nov 25 08:15:21 crc kubenswrapper[4760]: E1125 08:15:21.667184 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c22b69-2113-4060-9ec8-fea251da8846" containerName="collect-profiles"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.667202 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c22b69-2113-4060-9ec8-fea251da8846" containerName="collect-profiles"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.667338 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c22b69-2113-4060-9ec8-fea251da8846" containerName="collect-profiles"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.667924 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.674321 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8s28s"]
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.716437 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79fgr\" (UniqueName: \"kubernetes.io/projected/613c9059-f285-4892-96c6-e27686513a0a-kube-api-access-79fgr\") pod \"marketplace-operator-79b997595-8s28s\" (UID: \"613c9059-f285-4892-96c6-e27686513a0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.716547 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/613c9059-f285-4892-96c6-e27686513a0a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8s28s\" (UID: \"613c9059-f285-4892-96c6-e27686513a0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.716593 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/613c9059-f285-4892-96c6-e27686513a0a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8s28s\" (UID: \"613c9059-f285-4892-96c6-e27686513a0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.817364 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79fgr\" (UniqueName: \"kubernetes.io/projected/613c9059-f285-4892-96c6-e27686513a0a-kube-api-access-79fgr\") pod \"marketplace-operator-79b997595-8s28s\" (UID: \"613c9059-f285-4892-96c6-e27686513a0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.817733 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/613c9059-f285-4892-96c6-e27686513a0a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8s28s\" (UID: \"613c9059-f285-4892-96c6-e27686513a0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.817768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/613c9059-f285-4892-96c6-e27686513a0a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8s28s\" (UID: \"613c9059-f285-4892-96c6-e27686513a0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.819432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/613c9059-f285-4892-96c6-e27686513a0a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8s28s\" (UID: \"613c9059-f285-4892-96c6-e27686513a0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.824719 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/613c9059-f285-4892-96c6-e27686513a0a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8s28s\" (UID: \"613c9059-f285-4892-96c6-e27686513a0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:21 crc kubenswrapper[4760]: I1125 08:15:21.834901 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79fgr\" (UniqueName: \"kubernetes.io/projected/613c9059-f285-4892-96c6-e27686513a0a-kube-api-access-79fgr\") pod \"marketplace-operator-79b997595-8s28s\" (UID: \"613c9059-f285-4892-96c6-e27686513a0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.038503 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.048151 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v89q8"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.059015 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz6d4"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.060818 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmt2d"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.120059 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pr5fk"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.121525 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-utilities\") pod \"28856d66-d950-40a5-986c-0e3b0aa16949\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.121760 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-utilities\") pod \"dce383dd-3389-41fe-9223-ed5911c789fa\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.122055 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-catalog-content\") pod \"50b275d2-6236-4076-95b0-f2fab18a38f9\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.122172 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwsh6\" (UniqueName: \"kubernetes.io/projected/dce383dd-3389-41fe-9223-ed5911c789fa-kube-api-access-fwsh6\") pod \"dce383dd-3389-41fe-9223-ed5911c789fa\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.122302 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-catalog-content\") pod \"28856d66-d950-40a5-986c-0e3b0aa16949\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.122406 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm6q5\" (UniqueName: \"kubernetes.io/projected/28856d66-d950-40a5-986c-0e3b0aa16949-kube-api-access-wm6q5\") pod \"28856d66-d950-40a5-986c-0e3b0aa16949\" (UID: \"28856d66-d950-40a5-986c-0e3b0aa16949\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.122626 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk52j\" (UniqueName: \"kubernetes.io/projected/50b275d2-6236-4076-95b0-f2fab18a38f9-kube-api-access-gk52j\") pod \"50b275d2-6236-4076-95b0-f2fab18a38f9\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.122768 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-utilities\") pod \"50b275d2-6236-4076-95b0-f2fab18a38f9\" (UID: \"50b275d2-6236-4076-95b0-f2fab18a38f9\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.123164 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-catalog-content\") pod \"dce383dd-3389-41fe-9223-ed5911c789fa\" (UID: \"dce383dd-3389-41fe-9223-ed5911c789fa\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.123376 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-utilities" (OuterVolumeSpecName: "utilities") pod "28856d66-d950-40a5-986c-0e3b0aa16949" (UID: "28856d66-d950-40a5-986c-0e3b0aa16949"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.123617 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-utilities" (OuterVolumeSpecName: "utilities") pod "dce383dd-3389-41fe-9223-ed5911c789fa" (UID: "dce383dd-3389-41fe-9223-ed5911c789fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.123874 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.124064 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.124439 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-utilities" (OuterVolumeSpecName: "utilities") pod "50b275d2-6236-4076-95b0-f2fab18a38f9" (UID: "50b275d2-6236-4076-95b0-f2fab18a38f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.126466 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28856d66-d950-40a5-986c-0e3b0aa16949-kube-api-access-wm6q5" (OuterVolumeSpecName: "kube-api-access-wm6q5") pod "28856d66-d950-40a5-986c-0e3b0aa16949" (UID: "28856d66-d950-40a5-986c-0e3b0aa16949"). InnerVolumeSpecName "kube-api-access-wm6q5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.128010 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b275d2-6236-4076-95b0-f2fab18a38f9-kube-api-access-gk52j" (OuterVolumeSpecName: "kube-api-access-gk52j") pod "50b275d2-6236-4076-95b0-f2fab18a38f9" (UID: "50b275d2-6236-4076-95b0-f2fab18a38f9"). InnerVolumeSpecName "kube-api-access-gk52j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.128645 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce383dd-3389-41fe-9223-ed5911c789fa-kube-api-access-fwsh6" (OuterVolumeSpecName: "kube-api-access-fwsh6") pod "dce383dd-3389-41fe-9223-ed5911c789fa" (UID: "dce383dd-3389-41fe-9223-ed5911c789fa"). InnerVolumeSpecName "kube-api-access-fwsh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.163804 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.168637 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28856d66-d950-40a5-986c-0e3b0aa16949" (UID: "28856d66-d950-40a5-986c-0e3b0aa16949"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.198557 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dce383dd-3389-41fe-9223-ed5911c789fa" (UID: "dce383dd-3389-41fe-9223-ed5911c789fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.225110 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrr4x\" (UniqueName: \"kubernetes.io/projected/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-kube-api-access-wrr4x\") pod \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.225535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-operator-metrics\") pod \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.225568 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-utilities\") pod \"139fa8a2-b6c5-4624-9003-d418fdd22d55\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.225608 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-trusted-ca\") pod \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\" (UID: \"aec2d73b-e942-4f98-9b84-539bcc3e6fa8\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.226631 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "aec2d73b-e942-4f98-9b84-539bcc3e6fa8" (UID: "aec2d73b-e942-4f98-9b84-539bcc3e6fa8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.226759 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-utilities" (OuterVolumeSpecName: "utilities") pod "139fa8a2-b6c5-4624-9003-d418fdd22d55" (UID: "139fa8a2-b6c5-4624-9003-d418fdd22d55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.226845 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-catalog-content\") pod \"139fa8a2-b6c5-4624-9003-d418fdd22d55\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.226882 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xctnm\" (UniqueName: \"kubernetes.io/projected/139fa8a2-b6c5-4624-9003-d418fdd22d55-kube-api-access-xctnm\") pod \"139fa8a2-b6c5-4624-9003-d418fdd22d55\" (UID: \"139fa8a2-b6c5-4624-9003-d418fdd22d55\") "
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.227172 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28856d66-d950-40a5-986c-0e3b0aa16949-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.227191 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm6q5\" (UniqueName: \"kubernetes.io/projected/28856d66-d950-40a5-986c-0e3b0aa16949-kube-api-access-wm6q5\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.227207 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk52j\" (UniqueName: \"kubernetes.io/projected/50b275d2-6236-4076-95b0-f2fab18a38f9-kube-api-access-gk52j\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.227220 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.227232 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce383dd-3389-41fe-9223-ed5911c789fa-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.227263 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.227277 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.227290 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwsh6\" (UniqueName: \"kubernetes.io/projected/dce383dd-3389-41fe-9223-ed5911c789fa-kube-api-access-fwsh6\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.230269 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-kube-api-access-wrr4x" (OuterVolumeSpecName: "kube-api-access-wrr4x") pod "aec2d73b-e942-4f98-9b84-539bcc3e6fa8" (UID: "aec2d73b-e942-4f98-9b84-539bcc3e6fa8"). InnerVolumeSpecName "kube-api-access-wrr4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.231766 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139fa8a2-b6c5-4624-9003-d418fdd22d55-kube-api-access-xctnm" (OuterVolumeSpecName: "kube-api-access-xctnm") pod "139fa8a2-b6c5-4624-9003-d418fdd22d55" (UID: "139fa8a2-b6c5-4624-9003-d418fdd22d55"). InnerVolumeSpecName "kube-api-access-xctnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.232042 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "aec2d73b-e942-4f98-9b84-539bcc3e6fa8" (UID: "aec2d73b-e942-4f98-9b84-539bcc3e6fa8"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.266044 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50b275d2-6236-4076-95b0-f2fab18a38f9" (UID: "50b275d2-6236-4076-95b0-f2fab18a38f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.329382 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8s28s"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.329573 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrr4x\" (UniqueName: \"kubernetes.io/projected/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-kube-api-access-wrr4x\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.329707 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aec2d73b-e942-4f98-9b84-539bcc3e6fa8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.329721 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b275d2-6236-4076-95b0-f2fab18a38f9-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.329731 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xctnm\" (UniqueName: \"kubernetes.io/projected/139fa8a2-b6c5-4624-9003-d418fdd22d55-kube-api-access-xctnm\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.340543 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "139fa8a2-b6c5-4624-9003-d418fdd22d55" (UID: "139fa8a2-b6c5-4624-9003-d418fdd22d55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.431037 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139fa8a2-b6c5-4624-9003-d418fdd22d55-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.708327 4760 generic.go:334] "Generic (PLEG): container finished" podID="aec2d73b-e942-4f98-9b84-539bcc3e6fa8" containerID="04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02" exitCode=0
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.708471 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.709455 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" event={"ID":"aec2d73b-e942-4f98-9b84-539bcc3e6fa8","Type":"ContainerDied","Data":"04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.709502 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-km6r5" event={"ID":"aec2d73b-e942-4f98-9b84-539bcc3e6fa8","Type":"ContainerDied","Data":"59aa9e81f4aa1e230e96bc705f1534c00178548244a1e2908c787139a91edc68"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.709522 4760 scope.go:117] "RemoveContainer" containerID="04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.712425 4760 generic.go:334] "Generic (PLEG): container finished" podID="dce383dd-3389-41fe-9223-ed5911c789fa" containerID="17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e" exitCode=0
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.712471 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v89q8" event={"ID":"dce383dd-3389-41fe-9223-ed5911c789fa","Type":"ContainerDied","Data":"17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.712502 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v89q8" event={"ID":"dce383dd-3389-41fe-9223-ed5911c789fa","Type":"ContainerDied","Data":"f34671d567f6ff0d8d538539913d0e94ed9a05d154f429baff7d24e6940b4ec8"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.712564 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v89q8"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.717578 4760 generic.go:334] "Generic (PLEG): container finished" podID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerID="765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82" exitCode=0
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.717639 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz6d4" event={"ID":"50b275d2-6236-4076-95b0-f2fab18a38f9","Type":"ContainerDied","Data":"765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.717664 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz6d4" event={"ID":"50b275d2-6236-4076-95b0-f2fab18a38f9","Type":"ContainerDied","Data":"eb918614b7629bcc7e253811f28ed455ad937d8714b346d5eeb115c1b6d4e656"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.717711 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz6d4"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.720881 4760 generic.go:334] "Generic (PLEG): container finished" podID="28856d66-d950-40a5-986c-0e3b0aa16949" containerID="379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8" exitCode=0
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.720953 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmt2d"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.720961 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmt2d" event={"ID":"28856d66-d950-40a5-986c-0e3b0aa16949","Type":"ContainerDied","Data":"379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.721010 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmt2d" event={"ID":"28856d66-d950-40a5-986c-0e3b0aa16949","Type":"ContainerDied","Data":"d5350fe477d99d69270bbea8921d5c3b104db39aa91300050319f68f551f3505"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.722502 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8s28s" event={"ID":"613c9059-f285-4892-96c6-e27686513a0a","Type":"ContainerStarted","Data":"27d6c7c42f21d9420e6af62a3817fc12797362ff1eb9b744c88ca42025fbacfe"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.722555 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8s28s" event={"ID":"613c9059-f285-4892-96c6-e27686513a0a","Type":"ContainerStarted","Data":"39b6767a09437ea5a3da3703ea12a6440294601cb7951c014ff0c108bfb1e2b6"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.723019 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.726268 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8s28s"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.734751 4760 scope.go:117] "RemoveContainer" containerID="04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02"
Nov 25 08:15:22 crc kubenswrapper[4760]: E1125 08:15:22.736393 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02\": container with ID starting with 04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02 not found: ID does not exist" containerID="04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.736437 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02"} err="failed to get container status \"04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02\": rpc error: code = NotFound desc = could not find container \"04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02\": container with ID starting with 04657c01453def3d622b339ce578b67b33495e27f22397718522a9932b081a02 not found: ID does not exist"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.736467 4760 scope.go:117] "RemoveContainer" containerID="17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.736979 4760 generic.go:334] "Generic (PLEG): container finished" podID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerID="e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514" exitCode=0
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.737048 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pr5fk"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.737045 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr5fk" event={"ID":"139fa8a2-b6c5-4624-9003-d418fdd22d55","Type":"ContainerDied","Data":"e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.737330 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pr5fk" event={"ID":"139fa8a2-b6c5-4624-9003-d418fdd22d55","Type":"ContainerDied","Data":"9c2dd9ae3c78432fe9802e3a685529206ae85515e3dad135705b4891f3e7b3ea"}
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.763433 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8s28s" podStartSLOduration=1.7634060919999999 podStartE2EDuration="1.763406092s" podCreationTimestamp="2025-11-25 08:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:15:22.742311449 +0000 UTC m=+256.451342244" watchObservedRunningTime="2025-11-25 08:15:22.763406092 +0000 UTC m=+256.472436897"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.767335 4760 scope.go:117] "RemoveContainer" containerID="3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.768686 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-km6r5"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.776624 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-km6r5"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.781655 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz6d4"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.792927 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qz6d4"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.796291 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmt2d"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.799114 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmt2d"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.816429 4760 scope.go:117] "RemoveContainer" containerID="b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.838654 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v89q8"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.842961 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v89q8"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.850952 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pr5fk"]
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.864175 4760 scope.go:117] "RemoveContainer" containerID="17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e"
Nov 25 08:15:22 crc kubenswrapper[4760]: E1125 08:15:22.865187 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e\": container with ID starting with 17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e not found: ID does not exist" containerID="17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.865234 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e"} err="failed to get container status \"17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e\": rpc error: code = NotFound desc = could not find container \"17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e\": container with ID starting with 17f5b1b571a2fb938741b0b03576a78da9cc5776030da6daa4159c1cd13a601e not found: ID does not exist"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.865280 4760 scope.go:117] "RemoveContainer" containerID="3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13"
Nov 25 08:15:22 crc kubenswrapper[4760]: E1125 08:15:22.865690 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13\": container with ID starting with 3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13 not found: ID does not exist" containerID="3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.865733 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13"} err="failed to get container status \"3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13\": rpc error: code = NotFound desc = could not find container \"3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13\": container with ID starting with 3d477e565b434f5ccab20ca67cfaa47fb001dc6ac9ba603f4010e2bfcd4cdb13 not found: ID does not exist"
Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.865761 4760 scope.go:117] "RemoveContainer" containerID="b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22"
Nov 25
08:15:22 crc kubenswrapper[4760]: E1125 08:15:22.866090 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22\": container with ID starting with b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22 not found: ID does not exist" containerID="b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.866114 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22"} err="failed to get container status \"b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22\": rpc error: code = NotFound desc = could not find container \"b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22\": container with ID starting with b0191a46ce2b06a884e5fb43d9d032d403c326ef4627be8828063b21cb8eee22 not found: ID does not exist" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.866139 4760 scope.go:117] "RemoveContainer" containerID="765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.867314 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pr5fk"] Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.892586 4760 scope.go:117] "RemoveContainer" containerID="9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.903823 4760 scope.go:117] "RemoveContainer" containerID="25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.920199 4760 scope.go:117] "RemoveContainer" containerID="765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82" Nov 25 08:15:22 crc kubenswrapper[4760]: E1125 08:15:22.920774 
4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82\": container with ID starting with 765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82 not found: ID does not exist" containerID="765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.920827 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82"} err="failed to get container status \"765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82\": rpc error: code = NotFound desc = could not find container \"765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82\": container with ID starting with 765e89347cbab264900857f873ed134f2cbfd2a05834db8c937fd6691531fc82 not found: ID does not exist" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.920859 4760 scope.go:117] "RemoveContainer" containerID="9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0" Nov 25 08:15:22 crc kubenswrapper[4760]: E1125 08:15:22.921381 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0\": container with ID starting with 9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0 not found: ID does not exist" containerID="9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.921406 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0"} err="failed to get container status \"9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0\": rpc error: code = 
NotFound desc = could not find container \"9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0\": container with ID starting with 9af7b6934f519b9998d22b8136a6f00322b7b6320b05b87cb3cd336a6de815d0 not found: ID does not exist" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.921427 4760 scope.go:117] "RemoveContainer" containerID="25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300" Nov 25 08:15:22 crc kubenswrapper[4760]: E1125 08:15:22.921680 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300\": container with ID starting with 25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300 not found: ID does not exist" containerID="25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.921699 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300"} err="failed to get container status \"25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300\": rpc error: code = NotFound desc = could not find container \"25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300\": container with ID starting with 25c977f44d823e424e95e84293c94f908ee2544d76e7b7ca8fe3678a88aa5300 not found: ID does not exist" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.921711 4760 scope.go:117] "RemoveContainer" containerID="379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.942398 4760 scope.go:117] "RemoveContainer" containerID="f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.944491 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" path="/var/lib/kubelet/pods/139fa8a2-b6c5-4624-9003-d418fdd22d55/volumes" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.945127 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" path="/var/lib/kubelet/pods/28856d66-d950-40a5-986c-0e3b0aa16949/volumes" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.945796 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" path="/var/lib/kubelet/pods/50b275d2-6236-4076-95b0-f2fab18a38f9/volumes" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.947279 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec2d73b-e942-4f98-9b84-539bcc3e6fa8" path="/var/lib/kubelet/pods/aec2d73b-e942-4f98-9b84-539bcc3e6fa8/volumes" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.947829 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" path="/var/lib/kubelet/pods/dce383dd-3389-41fe-9223-ed5911c789fa/volumes" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.961807 4760 scope.go:117] "RemoveContainer" containerID="22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.974757 4760 scope.go:117] "RemoveContainer" containerID="379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8" Nov 25 08:15:22 crc kubenswrapper[4760]: E1125 08:15:22.974993 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8\": container with ID starting with 379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8 not found: ID does not exist" containerID="379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 
08:15:22.975025 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8"} err="failed to get container status \"379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8\": rpc error: code = NotFound desc = could not find container \"379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8\": container with ID starting with 379c791383f5289b69f1d676234831c6804fdf5ae5c0ff293e5bf47ba2fd09e8 not found: ID does not exist" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.975047 4760 scope.go:117] "RemoveContainer" containerID="f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d" Nov 25 08:15:22 crc kubenswrapper[4760]: E1125 08:15:22.975425 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d\": container with ID starting with f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d not found: ID does not exist" containerID="f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.975448 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d"} err="failed to get container status \"f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d\": rpc error: code = NotFound desc = could not find container \"f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d\": container with ID starting with f1b704292c11aa22c77efcebeead6238f903516506b333c9dc3ed13eda15c16d not found: ID does not exist" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.975461 4760 scope.go:117] "RemoveContainer" containerID="22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030" Nov 25 08:15:22 crc 
kubenswrapper[4760]: E1125 08:15:22.975725 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030\": container with ID starting with 22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030 not found: ID does not exist" containerID="22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.975743 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030"} err="failed to get container status \"22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030\": rpc error: code = NotFound desc = could not find container \"22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030\": container with ID starting with 22aa3e731b13e1169cdb26ffaf3f0a6b5ab5c1724bafa2a13d47a5ff8813e030 not found: ID does not exist" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.975757 4760 scope.go:117] "RemoveContainer" containerID="e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514" Nov 25 08:15:22 crc kubenswrapper[4760]: I1125 08:15:22.989598 4760 scope.go:117] "RemoveContainer" containerID="2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.003314 4760 scope.go:117] "RemoveContainer" containerID="62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.017226 4760 scope.go:117] "RemoveContainer" containerID="e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.018076 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514\": container with ID starting with e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514 not found: ID does not exist" containerID="e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.018149 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514"} err="failed to get container status \"e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514\": rpc error: code = NotFound desc = could not find container \"e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514\": container with ID starting with e9a81660d921cda95b1d351249cdd13c6ec989ccf954e3c8265f27361c48b514 not found: ID does not exist" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.018187 4760 scope.go:117] "RemoveContainer" containerID="2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.019169 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0\": container with ID starting with 2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0 not found: ID does not exist" containerID="2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.019231 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0"} err="failed to get container status \"2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0\": rpc error: code = NotFound desc = could not find container \"2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0\": container with ID 
starting with 2a0d6ea8a967e7982382077fe1df718e1fde1bf820a18c95dbf7e40856fa69d0 not found: ID does not exist" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.019288 4760 scope.go:117] "RemoveContainer" containerID="62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.020395 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991\": container with ID starting with 62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991 not found: ID does not exist" containerID="62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.020414 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991"} err="failed to get container status \"62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991\": rpc error: code = NotFound desc = could not find container \"62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991\": container with ID starting with 62854bc46df05abe76f93589b487e0b78ffc63f23cfbc461802219fe42c8e991 not found: ID does not exist" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816382 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x6k2l"] Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816606 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816620 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816632 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerName="extract-content" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816640 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerName="extract-content" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816655 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816663 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816674 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816684 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816699 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" containerName="extract-utilities" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816708 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" containerName="extract-utilities" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816717 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" containerName="extract-content" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816725 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" containerName="extract-content" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816737 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" containerName="extract-content" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816745 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" containerName="extract-content" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816757 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816765 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816776 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" containerName="extract-utilities" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816784 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" containerName="extract-utilities" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816798 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerName="extract-content" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816808 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerName="extract-content" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816821 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec2d73b-e942-4f98-9b84-539bcc3e6fa8" containerName="marketplace-operator" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816831 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec2d73b-e942-4f98-9b84-539bcc3e6fa8" containerName="marketplace-operator" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816844 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerName="extract-utilities" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816855 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerName="extract-utilities" Nov 25 08:15:23 crc kubenswrapper[4760]: E1125 08:15:23.816867 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerName="extract-utilities" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816876 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerName="extract-utilities" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.816987 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="139fa8a2-b6c5-4624-9003-d418fdd22d55" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.817000 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce383dd-3389-41fe-9223-ed5911c789fa" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.817011 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec2d73b-e942-4f98-9b84-539bcc3e6fa8" containerName="marketplace-operator" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.817021 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="28856d66-d950-40a5-986c-0e3b0aa16949" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.817032 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b275d2-6236-4076-95b0-f2fab18a38f9" containerName="registry-server" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.818569 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.820634 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.826676 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6k2l"] Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.948533 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxkgq\" (UniqueName: \"kubernetes.io/projected/41eb0ddf-5d08-46bc-b6d4-59f6f86369e6-kube-api-access-kxkgq\") pod \"redhat-marketplace-x6k2l\" (UID: \"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6\") " pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.948600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41eb0ddf-5d08-46bc-b6d4-59f6f86369e6-catalog-content\") pod \"redhat-marketplace-x6k2l\" (UID: \"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6\") " pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:23 crc kubenswrapper[4760]: I1125 08:15:23.948669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41eb0ddf-5d08-46bc-b6d4-59f6f86369e6-utilities\") pod \"redhat-marketplace-x6k2l\" (UID: \"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6\") " pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.014865 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8skdl"] Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.015763 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.017639 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.024147 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8skdl"] Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.050430 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxkgq\" (UniqueName: \"kubernetes.io/projected/41eb0ddf-5d08-46bc-b6d4-59f6f86369e6-kube-api-access-kxkgq\") pod \"redhat-marketplace-x6k2l\" (UID: \"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6\") " pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.050510 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41eb0ddf-5d08-46bc-b6d4-59f6f86369e6-catalog-content\") pod \"redhat-marketplace-x6k2l\" (UID: \"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6\") " pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.050572 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41eb0ddf-5d08-46bc-b6d4-59f6f86369e6-utilities\") pod \"redhat-marketplace-x6k2l\" (UID: \"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6\") " pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.051281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41eb0ddf-5d08-46bc-b6d4-59f6f86369e6-catalog-content\") pod \"redhat-marketplace-x6k2l\" (UID: \"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6\") " 
pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.051359 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41eb0ddf-5d08-46bc-b6d4-59f6f86369e6-utilities\") pod \"redhat-marketplace-x6k2l\" (UID: \"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6\") " pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.074188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxkgq\" (UniqueName: \"kubernetes.io/projected/41eb0ddf-5d08-46bc-b6d4-59f6f86369e6-kube-api-access-kxkgq\") pod \"redhat-marketplace-x6k2l\" (UID: \"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6\") " pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.143461 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.152136 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-utilities\") pod \"redhat-operators-8skdl\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") " pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.152190 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-catalog-content\") pod \"redhat-operators-8skdl\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") " pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.152232 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kf7j6\" (UniqueName: \"kubernetes.io/projected/b7304e75-6f0d-481d-8fbc-5de0e061032d-kube-api-access-kf7j6\") pod \"redhat-operators-8skdl\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") " pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.253040 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-catalog-content\") pod \"redhat-operators-8skdl\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") " pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.253487 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf7j6\" (UniqueName: \"kubernetes.io/projected/b7304e75-6f0d-481d-8fbc-5de0e061032d-kube-api-access-kf7j6\") pod \"redhat-operators-8skdl\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") " pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.253758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-utilities\") pod \"redhat-operators-8skdl\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") " pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.254216 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-utilities\") pod \"redhat-operators-8skdl\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") " pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.255390 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-catalog-content\") pod \"redhat-operators-8skdl\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") " pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.276045 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf7j6\" (UniqueName: \"kubernetes.io/projected/b7304e75-6f0d-481d-8fbc-5de0e061032d-kube-api-access-kf7j6\") pod \"redhat-operators-8skdl\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") " pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.320454 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6k2l"] Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.336479 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.552304 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8skdl"] Nov 25 08:15:24 crc kubenswrapper[4760]: W1125 08:15:24.556602 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7304e75_6f0d_481d_8fbc_5de0e061032d.slice/crio-0c8381eba06f54665a16e53aee3e5b85729548dc00b50ec6ab0c7e352774a2cb WatchSource:0}: Error finding container 0c8381eba06f54665a16e53aee3e5b85729548dc00b50ec6ab0c7e352774a2cb: Status 404 returned error can't find the container with id 0c8381eba06f54665a16e53aee3e5b85729548dc00b50ec6ab0c7e352774a2cb Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.753156 4760 generic.go:334] "Generic (PLEG): container finished" podID="41eb0ddf-5d08-46bc-b6d4-59f6f86369e6" containerID="de4a08d752683ae55ec7d58546eaf7a340d26135434f3e00903394a239f80c9d" exitCode=0 Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.753233 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6k2l" event={"ID":"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6","Type":"ContainerDied","Data":"de4a08d752683ae55ec7d58546eaf7a340d26135434f3e00903394a239f80c9d"} Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.753279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6k2l" event={"ID":"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6","Type":"ContainerStarted","Data":"ab4b6f1505285daf94735eb10b4b46de4a02252f4d11c32de9de31799a76cf8e"} Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.755927 4760 generic.go:334] "Generic (PLEG): container finished" podID="b7304e75-6f0d-481d-8fbc-5de0e061032d" containerID="6a00914b25a9eb2add652e1f4ad95168169034985f53ea5a9b773914f49e724e" exitCode=0 Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.756048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8skdl" event={"ID":"b7304e75-6f0d-481d-8fbc-5de0e061032d","Type":"ContainerDied","Data":"6a00914b25a9eb2add652e1f4ad95168169034985f53ea5a9b773914f49e724e"} Nov 25 08:15:24 crc kubenswrapper[4760]: I1125 08:15:24.756080 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8skdl" event={"ID":"b7304e75-6f0d-481d-8fbc-5de0e061032d","Type":"ContainerStarted","Data":"0c8381eba06f54665a16e53aee3e5b85729548dc00b50ec6ab0c7e352774a2cb"} Nov 25 08:15:25 crc kubenswrapper[4760]: I1125 08:15:25.764769 4760 generic.go:334] "Generic (PLEG): container finished" podID="41eb0ddf-5d08-46bc-b6d4-59f6f86369e6" containerID="38f313edb13e9ebf01b9183a24bee673a3be273e664f4fd80323497c92c714ae" exitCode=0 Nov 25 08:15:25 crc kubenswrapper[4760]: I1125 08:15:25.764962 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6k2l" 
event={"ID":"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6","Type":"ContainerDied","Data":"38f313edb13e9ebf01b9183a24bee673a3be273e664f4fd80323497c92c714ae"} Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.216161 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7mrhl"] Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.219932 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.224309 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.229621 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mrhl"] Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.279581 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-utilities\") pod \"community-operators-7mrhl\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") " pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.280325 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-catalog-content\") pod \"community-operators-7mrhl\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") " pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.280424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr9tv\" (UniqueName: \"kubernetes.io/projected/02d0ec21-fa37-4499-8173-5821ec88a61f-kube-api-access-jr9tv\") pod 
\"community-operators-7mrhl\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") " pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.381413 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-utilities\") pod \"community-operators-7mrhl\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") " pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.381575 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-catalog-content\") pod \"community-operators-7mrhl\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") " pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.381635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr9tv\" (UniqueName: \"kubernetes.io/projected/02d0ec21-fa37-4499-8173-5821ec88a61f-kube-api-access-jr9tv\") pod \"community-operators-7mrhl\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") " pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.382057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-utilities\") pod \"community-operators-7mrhl\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") " pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.382366 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-catalog-content\") pod \"community-operators-7mrhl\" (UID: 
\"02d0ec21-fa37-4499-8173-5821ec88a61f\") " pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.409795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr9tv\" (UniqueName: \"kubernetes.io/projected/02d0ec21-fa37-4499-8173-5821ec88a61f-kube-api-access-jr9tv\") pod \"community-operators-7mrhl\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") " pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.419724 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-chml5"] Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.420996 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.423019 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.437013 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chml5"] Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.485081 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxxbn\" (UniqueName: \"kubernetes.io/projected/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-kube-api-access-xxxbn\") pod \"certified-operators-chml5\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.485161 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-catalog-content\") pod \"certified-operators-chml5\" (UID: 
\"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.485238 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-utilities\") pod \"certified-operators-chml5\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.580862 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.586640 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxxbn\" (UniqueName: \"kubernetes.io/projected/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-kube-api-access-xxxbn\") pod \"certified-operators-chml5\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.586715 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-catalog-content\") pod \"certified-operators-chml5\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.586769 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-utilities\") pod \"certified-operators-chml5\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.587395 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-utilities\") pod \"certified-operators-chml5\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.588448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-catalog-content\") pod \"certified-operators-chml5\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.616461 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxxbn\" (UniqueName: \"kubernetes.io/projected/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-kube-api-access-xxxbn\") pod \"certified-operators-chml5\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.767522 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.792136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6k2l" event={"ID":"41eb0ddf-5d08-46bc-b6d4-59f6f86369e6","Type":"ContainerStarted","Data":"fd464c1f6e6c045ea3789748992e030740b1ace8c192876c845904eb591f3374"} Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.798775 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8skdl" event={"ID":"b7304e75-6f0d-481d-8fbc-5de0e061032d","Type":"ContainerStarted","Data":"d66ae8524eb9cfc9f95235391796b05c183f86405cd397c0231f193ea0423c28"} Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.858591 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x6k2l" podStartSLOduration=2.3190447069999998 podStartE2EDuration="3.858566391s" podCreationTimestamp="2025-11-25 08:15:23 +0000 UTC" firstStartedPulling="2025-11-25 08:15:24.754359619 +0000 UTC m=+258.463390414" lastFinishedPulling="2025-11-25 08:15:26.293881303 +0000 UTC m=+260.002912098" observedRunningTime="2025-11-25 08:15:26.819938901 +0000 UTC m=+260.528969706" watchObservedRunningTime="2025-11-25 08:15:26.858566391 +0000 UTC m=+260.567597186" Nov 25 08:15:26 crc kubenswrapper[4760]: I1125 08:15:26.864428 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mrhl"] Nov 25 08:15:26 crc kubenswrapper[4760]: W1125 08:15:26.876167 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02d0ec21_fa37_4499_8173_5821ec88a61f.slice/crio-85134449014b1357c687ab506a5416a5804e50ab7d0293b205fe77529c1b3730 WatchSource:0}: Error finding container 85134449014b1357c687ab506a5416a5804e50ab7d0293b205fe77529c1b3730: Status 404 returned error can't find the container 
with id 85134449014b1357c687ab506a5416a5804e50ab7d0293b205fe77529c1b3730 Nov 25 08:15:27 crc kubenswrapper[4760]: I1125 08:15:27.209097 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chml5"] Nov 25 08:15:27 crc kubenswrapper[4760]: W1125 08:15:27.213644 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36bfebb6_11e8_4a9d_9bb2_490ae4405cd0.slice/crio-677d9f45b96d0f024c9351b41186d1e444105e69b8aa138b0e4463cb9fa4ed8c WatchSource:0}: Error finding container 677d9f45b96d0f024c9351b41186d1e444105e69b8aa138b0e4463cb9fa4ed8c: Status 404 returned error can't find the container with id 677d9f45b96d0f024c9351b41186d1e444105e69b8aa138b0e4463cb9fa4ed8c Nov 25 08:15:27 crc kubenswrapper[4760]: I1125 08:15:27.807863 4760 generic.go:334] "Generic (PLEG): container finished" podID="b7304e75-6f0d-481d-8fbc-5de0e061032d" containerID="d66ae8524eb9cfc9f95235391796b05c183f86405cd397c0231f193ea0423c28" exitCode=0 Nov 25 08:15:27 crc kubenswrapper[4760]: I1125 08:15:27.807948 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8skdl" event={"ID":"b7304e75-6f0d-481d-8fbc-5de0e061032d","Type":"ContainerDied","Data":"d66ae8524eb9cfc9f95235391796b05c183f86405cd397c0231f193ea0423c28"} Nov 25 08:15:27 crc kubenswrapper[4760]: I1125 08:15:27.809992 4760 generic.go:334] "Generic (PLEG): container finished" podID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerID="57b74452faade2c3c36321c26262dc91d7e29e740fd3ca9ae89ad845e8965e1a" exitCode=0 Nov 25 08:15:27 crc kubenswrapper[4760]: I1125 08:15:27.810062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chml5" event={"ID":"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0","Type":"ContainerDied","Data":"57b74452faade2c3c36321c26262dc91d7e29e740fd3ca9ae89ad845e8965e1a"} Nov 25 08:15:27 crc kubenswrapper[4760]: I1125 08:15:27.810088 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chml5" event={"ID":"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0","Type":"ContainerStarted","Data":"677d9f45b96d0f024c9351b41186d1e444105e69b8aa138b0e4463cb9fa4ed8c"} Nov 25 08:15:27 crc kubenswrapper[4760]: I1125 08:15:27.815880 4760 generic.go:334] "Generic (PLEG): container finished" podID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerID="55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4" exitCode=0 Nov 25 08:15:27 crc kubenswrapper[4760]: I1125 08:15:27.815981 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mrhl" event={"ID":"02d0ec21-fa37-4499-8173-5821ec88a61f","Type":"ContainerDied","Data":"55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4"} Nov 25 08:15:27 crc kubenswrapper[4760]: I1125 08:15:27.816004 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mrhl" event={"ID":"02d0ec21-fa37-4499-8173-5821ec88a61f","Type":"ContainerStarted","Data":"85134449014b1357c687ab506a5416a5804e50ab7d0293b205fe77529c1b3730"} Nov 25 08:15:28 crc kubenswrapper[4760]: I1125 08:15:28.824160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8skdl" event={"ID":"b7304e75-6f0d-481d-8fbc-5de0e061032d","Type":"ContainerStarted","Data":"1e9ff90afd0276a4a4143c71e255cc7fa37b8d56d49b03cbf206eabaad74b26d"} Nov 25 08:15:28 crc kubenswrapper[4760]: I1125 08:15:28.829431 4760 generic.go:334] "Generic (PLEG): container finished" podID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerID="83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b" exitCode=0 Nov 25 08:15:28 crc kubenswrapper[4760]: I1125 08:15:28.829625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mrhl" 
event={"ID":"02d0ec21-fa37-4499-8173-5821ec88a61f","Type":"ContainerDied","Data":"83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b"} Nov 25 08:15:28 crc kubenswrapper[4760]: I1125 08:15:28.843735 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8skdl" podStartSLOduration=1.059768114 podStartE2EDuration="4.843713186s" podCreationTimestamp="2025-11-25 08:15:24 +0000 UTC" firstStartedPulling="2025-11-25 08:15:24.758207023 +0000 UTC m=+258.467237818" lastFinishedPulling="2025-11-25 08:15:28.542152095 +0000 UTC m=+262.251182890" observedRunningTime="2025-11-25 08:15:28.841217043 +0000 UTC m=+262.550247838" watchObservedRunningTime="2025-11-25 08:15:28.843713186 +0000 UTC m=+262.552743981" Nov 25 08:15:29 crc kubenswrapper[4760]: E1125 08:15:29.204653 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36bfebb6_11e8_4a9d_9bb2_490ae4405cd0.slice/crio-conmon-5d6b822026d2709b772adce245c31d38fcf4f66bd45c7ece11a5cea2d576058f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36bfebb6_11e8_4a9d_9bb2_490ae4405cd0.slice/crio-5d6b822026d2709b772adce245c31d38fcf4f66bd45c7ece11a5cea2d576058f.scope\": RecentStats: unable to find data in memory cache]" Nov 25 08:15:29 crc kubenswrapper[4760]: I1125 08:15:29.836080 4760 generic.go:334] "Generic (PLEG): container finished" podID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerID="5d6b822026d2709b772adce245c31d38fcf4f66bd45c7ece11a5cea2d576058f" exitCode=0 Nov 25 08:15:29 crc kubenswrapper[4760]: I1125 08:15:29.836174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chml5" 
event={"ID":"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0","Type":"ContainerDied","Data":"5d6b822026d2709b772adce245c31d38fcf4f66bd45c7ece11a5cea2d576058f"} Nov 25 08:15:30 crc kubenswrapper[4760]: I1125 08:15:30.850697 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mrhl" event={"ID":"02d0ec21-fa37-4499-8173-5821ec88a61f","Type":"ContainerStarted","Data":"65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85"} Nov 25 08:15:30 crc kubenswrapper[4760]: I1125 08:15:30.870520 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7mrhl" podStartSLOduration=2.999934048 podStartE2EDuration="4.870500631s" podCreationTimestamp="2025-11-25 08:15:26 +0000 UTC" firstStartedPulling="2025-11-25 08:15:27.817082864 +0000 UTC m=+261.526113659" lastFinishedPulling="2025-11-25 08:15:29.687649447 +0000 UTC m=+263.396680242" observedRunningTime="2025-11-25 08:15:30.867690468 +0000 UTC m=+264.576721263" watchObservedRunningTime="2025-11-25 08:15:30.870500631 +0000 UTC m=+264.579531426" Nov 25 08:15:31 crc kubenswrapper[4760]: I1125 08:15:31.859022 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chml5" event={"ID":"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0","Type":"ContainerStarted","Data":"9ab2c47fd9e64da1d5984e2d3a93d33df5d5e68de70a9d5b6d9b1bf909e7d0f7"} Nov 25 08:15:34 crc kubenswrapper[4760]: I1125 08:15:34.144045 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:34 crc kubenswrapper[4760]: I1125 08:15:34.144392 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:34 crc kubenswrapper[4760]: I1125 08:15:34.183416 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 
25 08:15:34 crc kubenswrapper[4760]: I1125 08:15:34.203161 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-chml5" podStartSLOduration=5.594033828 podStartE2EDuration="8.203129123s" podCreationTimestamp="2025-11-25 08:15:26 +0000 UTC" firstStartedPulling="2025-11-25 08:15:27.814436316 +0000 UTC m=+261.523467111" lastFinishedPulling="2025-11-25 08:15:30.423531611 +0000 UTC m=+264.132562406" observedRunningTime="2025-11-25 08:15:31.87814355 +0000 UTC m=+265.587174365" watchObservedRunningTime="2025-11-25 08:15:34.203129123 +0000 UTC m=+267.912159928" Nov 25 08:15:34 crc kubenswrapper[4760]: I1125 08:15:34.337613 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:34 crc kubenswrapper[4760]: I1125 08:15:34.337691 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:34 crc kubenswrapper[4760]: I1125 08:15:34.380402 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:34 crc kubenswrapper[4760]: I1125 08:15:34.914887 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x6k2l" Nov 25 08:15:34 crc kubenswrapper[4760]: I1125 08:15:34.925321 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8skdl" Nov 25 08:15:36 crc kubenswrapper[4760]: I1125 08:15:36.582348 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:36 crc kubenswrapper[4760]: I1125 08:15:36.582731 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:36 crc kubenswrapper[4760]: I1125 
08:15:36.621647 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:15:36 crc kubenswrapper[4760]: I1125 08:15:36.768545 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:36 crc kubenswrapper[4760]: I1125 08:15:36.768611 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:36 crc kubenswrapper[4760]: I1125 08:15:36.809625 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:36 crc kubenswrapper[4760]: I1125 08:15:36.922825 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-chml5" Nov 25 08:15:36 crc kubenswrapper[4760]: I1125 08:15:36.928613 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7mrhl" Nov 25 08:17:31 crc kubenswrapper[4760]: I1125 08:17:31.746619 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:17:31 crc kubenswrapper[4760]: I1125 08:17:31.748366 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:18:01 crc kubenswrapper[4760]: I1125 08:18:01.746819 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:18:01 crc kubenswrapper[4760]: I1125 08:18:01.747455 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:18:23 crc kubenswrapper[4760]: I1125 08:18:23.860834 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qgtvx"] Nov 25 08:18:23 crc kubenswrapper[4760]: I1125 08:18:23.862216 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx" Nov 25 08:18:23 crc kubenswrapper[4760]: I1125 08:18:23.880522 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qgtvx"] Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.061954 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx" Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.062056 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-bound-sa-token\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.062087 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5cf\" (UniqueName: \"kubernetes.io/projected/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-kube-api-access-st5cf\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.062143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-registry-tls\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.062191 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-registry-certificates\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.062212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-trusted-ca\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.062314 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.062346 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.089268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.163663 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-bound-sa-token\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.163715 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5cf\" (UniqueName: \"kubernetes.io/projected/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-kube-api-access-st5cf\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.163742 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-registry-tls\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.163770 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-registry-certificates\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.163803 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-trusted-ca\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.163860 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.163889 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.164708 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.165223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-registry-certificates\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.165235 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-trusted-ca\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.175403 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.176013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-registry-tls\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.182336 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-bound-sa-token\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.182911 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5cf\" (UniqueName: \"kubernetes.io/projected/c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31-kube-api-access-st5cf\") pod \"image-registry-66df7c8f76-qgtvx\" (UID: \"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31\") " pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.480229 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.669492 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qgtvx"]
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.897795 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx" event={"ID":"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31","Type":"ContainerStarted","Data":"3f9adf0f5e9dbd4419645d3e968f74f4beeaedd8cfcc45aa7e9713b59b561089"}
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.897843 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx" event={"ID":"c3f84fd3-5ffd-4b53-91a8-e3d8c0ccbb31","Type":"ContainerStarted","Data":"23eaadc07ef0ff9b20be26a5580af0d0149f0f59f3d2045324008afc035adf5e"}
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.898046 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:24 crc kubenswrapper[4760]: I1125 08:18:24.916417 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx" podStartSLOduration=1.916400476 podStartE2EDuration="1.916400476s" podCreationTimestamp="2025-11-25 08:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:18:24.915103268 +0000 UTC m=+438.624134063" watchObservedRunningTime="2025-11-25 08:18:24.916400476 +0000 UTC m=+438.625431281"
Nov 25 08:18:31 crc kubenswrapper[4760]: I1125 08:18:31.746124 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 08:18:31 crc kubenswrapper[4760]: I1125 08:18:31.747125 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 08:18:31 crc kubenswrapper[4760]: I1125 08:18:31.747200 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs"
Nov 25 08:18:31 crc kubenswrapper[4760]: I1125 08:18:31.748161 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"308caa4dc448cb9739c473fcfee251cdc29a87eaebc0beb1e3567269bf4c7aa2"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 08:18:31 crc kubenswrapper[4760]: I1125 08:18:31.748271 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://308caa4dc448cb9739c473fcfee251cdc29a87eaebc0beb1e3567269bf4c7aa2" gracePeriod=600
Nov 25 08:18:31 crc kubenswrapper[4760]: I1125 08:18:31.938748 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="308caa4dc448cb9739c473fcfee251cdc29a87eaebc0beb1e3567269bf4c7aa2" exitCode=0
Nov 25 08:18:31 crc kubenswrapper[4760]: I1125 08:18:31.938800 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"308caa4dc448cb9739c473fcfee251cdc29a87eaebc0beb1e3567269bf4c7aa2"}
Nov 25 08:18:31 crc kubenswrapper[4760]: I1125 08:18:31.938840 4760 scope.go:117] "RemoveContainer" containerID="4c0b40225351b3bd05db69752da590196e0758602df02e4ed1767d6d8572c284"
Nov 25 08:18:32 crc kubenswrapper[4760]: I1125 08:18:32.946301 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"8ea91d6699ab5d174bc8311b29a2b59a97368218bd86cb03b23aecea38616074"}
Nov 25 08:18:44 crc kubenswrapper[4760]: I1125 08:18:44.486533 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qgtvx"
Nov 25 08:18:44 crc kubenswrapper[4760]: I1125 08:18:44.538408 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fcw7b"]
Nov 25 08:19:09 crc kubenswrapper[4760]: I1125 08:19:09.575056 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" podUID="584213d2-6225-4cab-b558-22d0b9990cd8" containerName="registry" containerID="cri-o://b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c" gracePeriod=30
Nov 25 08:19:09 crc kubenswrapper[4760]: I1125 08:19:09.941043 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b"
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.065611 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-registry-tls\") pod \"584213d2-6225-4cab-b558-22d0b9990cd8\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") "
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.065675 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-registry-certificates\") pod \"584213d2-6225-4cab-b558-22d0b9990cd8\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") "
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.065725 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-bound-sa-token\") pod \"584213d2-6225-4cab-b558-22d0b9990cd8\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") "
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.066449 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "584213d2-6225-4cab-b558-22d0b9990cd8" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.066507 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/584213d2-6225-4cab-b558-22d0b9990cd8-ca-trust-extracted\") pod \"584213d2-6225-4cab-b558-22d0b9990cd8\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") "
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.066601 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm974\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-kube-api-access-cm974\") pod \"584213d2-6225-4cab-b558-22d0b9990cd8\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") "
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.066815 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"584213d2-6225-4cab-b558-22d0b9990cd8\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") "
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.066880 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-trusted-ca\") pod \"584213d2-6225-4cab-b558-22d0b9990cd8\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") "
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.066915 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/584213d2-6225-4cab-b558-22d0b9990cd8-installation-pull-secrets\") pod \"584213d2-6225-4cab-b558-22d0b9990cd8\" (UID: \"584213d2-6225-4cab-b558-22d0b9990cd8\") "
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.067398 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-registry-certificates\") on node \"crc\" DevicePath \"\""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.067917 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "584213d2-6225-4cab-b558-22d0b9990cd8" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.071273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "584213d2-6225-4cab-b558-22d0b9990cd8" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.071705 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-kube-api-access-cm974" (OuterVolumeSpecName: "kube-api-access-cm974") pod "584213d2-6225-4cab-b558-22d0b9990cd8" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8"). InnerVolumeSpecName "kube-api-access-cm974". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.073532 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584213d2-6225-4cab-b558-22d0b9990cd8-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "584213d2-6225-4cab-b558-22d0b9990cd8" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.073616 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "584213d2-6225-4cab-b558-22d0b9990cd8" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.079290 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "584213d2-6225-4cab-b558-22d0b9990cd8" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.084796 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584213d2-6225-4cab-b558-22d0b9990cd8-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "584213d2-6225-4cab-b558-22d0b9990cd8" (UID: "584213d2-6225-4cab-b558-22d0b9990cd8"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.140981 4760 generic.go:334] "Generic (PLEG): container finished" podID="584213d2-6225-4cab-b558-22d0b9990cd8" containerID="b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c" exitCode=0
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.141031 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" event={"ID":"584213d2-6225-4cab-b558-22d0b9990cd8","Type":"ContainerDied","Data":"b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c"}
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.141064 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b" event={"ID":"584213d2-6225-4cab-b558-22d0b9990cd8","Type":"ContainerDied","Data":"af99955854d2e6f4f5fce89c965c8b5552c178bb962059cb4d5b37f5308f23f1"}
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.141063 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fcw7b"
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.141083 4760 scope.go:117] "RemoveContainer" containerID="b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c"
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.157401 4760 scope.go:117] "RemoveContainer" containerID="b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c"
Nov 25 08:19:10 crc kubenswrapper[4760]: E1125 08:19:10.157795 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c\": container with ID starting with b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c not found: ID does not exist" containerID="b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c"
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.157836 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c"} err="failed to get container status \"b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c\": rpc error: code = NotFound desc = could not find container \"b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c\": container with ID starting with b54125cb50d13ed2717b3aff3a40ffa1ee2f0147b5035b7f614995d5a1d2433c not found: ID does not exist"
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.168930 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-registry-tls\") on node \"crc\" DevicePath \"\""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.168965 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.168975 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/584213d2-6225-4cab-b558-22d0b9990cd8-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.168986 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm974\" (UniqueName: \"kubernetes.io/projected/584213d2-6225-4cab-b558-22d0b9990cd8-kube-api-access-cm974\") on node \"crc\" DevicePath \"\""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.168996 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/584213d2-6225-4cab-b558-22d0b9990cd8-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.169026 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/584213d2-6225-4cab-b558-22d0b9990cd8-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.169037 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fcw7b"]
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.171919 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fcw7b"]
Nov 25 08:19:10 crc kubenswrapper[4760]: I1125 08:19:10.945969 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584213d2-6225-4cab-b558-22d0b9990cd8" path="/var/lib/kubelet/pods/584213d2-6225-4cab-b558-22d0b9990cd8/volumes"
Nov 25 08:20:07 crc kubenswrapper[4760]: I1125 08:20:07.074026 4760 scope.go:117] "RemoveContainer" containerID="cd99c9a530d8cf9d7fb8fd782cab216f788be53b7d5253abd0c9feb62f49df1f"
Nov 25 08:21:01 crc kubenswrapper[4760]: I1125 08:21:01.746548 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 08:21:01 crc kubenswrapper[4760]: I1125 08:21:01.747133 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 08:21:07 crc kubenswrapper[4760]: I1125 08:21:07.112627 4760 scope.go:117] "RemoveContainer" containerID="ae2144fc5d9b5177ead0aeaa08b3492547d32be15fcf60190d6d1b8c1267931d"
Nov 25 08:21:07 crc kubenswrapper[4760]: I1125 08:21:07.133585 4760 scope.go:117] "RemoveContainer" containerID="48c2c0fb25cb9fc6e57655062d6ce0e7fc865c1d7c528cd2325aa33967a513ab"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.125606 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-m6mjj"]
Nov 25 08:21:21 crc kubenswrapper[4760]: E1125 08:21:21.126927 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584213d2-6225-4cab-b558-22d0b9990cd8" containerName="registry"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.126949 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="584213d2-6225-4cab-b558-22d0b9990cd8" containerName="registry"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.127115 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="584213d2-6225-4cab-b558-22d0b9990cd8" containerName="registry"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.127835 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.128420 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-86mq8"]
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.129188 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-86mq8"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.132545 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.132756 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.132899 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-mzqnz"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.135475 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vsxvz"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.146631 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-m6mjj"]
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.162307 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-86mq8"]
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.178270 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-7849w"]
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.178955 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.182414 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9fqjq"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.198994 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-7849w"]
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.251215 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq5fl\" (UniqueName: \"kubernetes.io/projected/a6f5c6ad-5f4b-442a-9041-7f053349a0e7-kube-api-access-hq5fl\") pod \"cert-manager-5b446d88c5-86mq8\" (UID: \"a6f5c6ad-5f4b-442a-9041-7f053349a0e7\") " pod="cert-manager/cert-manager-5b446d88c5-86mq8"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.251302 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skjbb\" (UniqueName: \"kubernetes.io/projected/7498b2f4-5621-4e4d-8d34-d8fc09271dcf-kube-api-access-skjbb\") pod \"cert-manager-cainjector-7f985d654d-m6mjj\" (UID: \"7498b2f4-5621-4e4d-8d34-d8fc09271dcf\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.353004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq5fl\" (UniqueName: \"kubernetes.io/projected/a6f5c6ad-5f4b-442a-9041-7f053349a0e7-kube-api-access-hq5fl\") pod \"cert-manager-5b446d88c5-86mq8\" (UID: \"a6f5c6ad-5f4b-442a-9041-7f053349a0e7\") " pod="cert-manager/cert-manager-5b446d88c5-86mq8"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.353092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skjbb\" (UniqueName: \"kubernetes.io/projected/7498b2f4-5621-4e4d-8d34-d8fc09271dcf-kube-api-access-skjbb\") pod \"cert-manager-cainjector-7f985d654d-m6mjj\" (UID: \"7498b2f4-5621-4e4d-8d34-d8fc09271dcf\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.353128 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hskzt\" (UniqueName: \"kubernetes.io/projected/10171911-dbe6-4b07-a58e-07713d8112c2-kube-api-access-hskzt\") pod \"cert-manager-webhook-5655c58dd6-7849w\" (UID: \"10171911-dbe6-4b07-a58e-07713d8112c2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.375199 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq5fl\" (UniqueName: \"kubernetes.io/projected/a6f5c6ad-5f4b-442a-9041-7f053349a0e7-kube-api-access-hq5fl\") pod \"cert-manager-5b446d88c5-86mq8\" (UID: \"a6f5c6ad-5f4b-442a-9041-7f053349a0e7\") " pod="cert-manager/cert-manager-5b446d88c5-86mq8"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.384105 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skjbb\" (UniqueName: \"kubernetes.io/projected/7498b2f4-5621-4e4d-8d34-d8fc09271dcf-kube-api-access-skjbb\") pod \"cert-manager-cainjector-7f985d654d-m6mjj\" (UID: \"7498b2f4-5621-4e4d-8d34-d8fc09271dcf\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.449013 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.454935 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hskzt\" (UniqueName: \"kubernetes.io/projected/10171911-dbe6-4b07-a58e-07713d8112c2-kube-api-access-hskzt\") pod \"cert-manager-webhook-5655c58dd6-7849w\" (UID: \"10171911-dbe6-4b07-a58e-07713d8112c2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.460088 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-86mq8"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.473641 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hskzt\" (UniqueName: \"kubernetes.io/projected/10171911-dbe6-4b07-a58e-07713d8112c2-kube-api-access-hskzt\") pod \"cert-manager-webhook-5655c58dd6-7849w\" (UID: \"10171911-dbe6-4b07-a58e-07713d8112c2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.491080 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w"
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.719191 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-86mq8"]
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.738721 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.761334 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-m6mjj"]
Nov 25 08:21:21 crc kubenswrapper[4760]: W1125 08:21:21.770036 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7498b2f4_5621_4e4d_8d34_d8fc09271dcf.slice/crio-8a18533c4ecff4bb9ead871eac66775891f5424bf93583282a2e3ec336ac1f3c WatchSource:0}: Error finding container 8a18533c4ecff4bb9ead871eac66775891f5424bf93583282a2e3ec336ac1f3c: Status 404 returned error can't find the container with id 8a18533c4ecff4bb9ead871eac66775891f5424bf93583282a2e3ec336ac1f3c
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.790963 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-7849w"]
Nov 25 08:21:21 crc kubenswrapper[4760]: W1125 08:21:21.795821 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10171911_dbe6_4b07_a58e_07713d8112c2.slice/crio-ac0780c7ad28585a2e8a4da4e29171457aeebeadc9a0ef2818283b827b658345 WatchSource:0}: Error finding container ac0780c7ad28585a2e8a4da4e29171457aeebeadc9a0ef2818283b827b658345: Status 404 returned error can't find the container with id ac0780c7ad28585a2e8a4da4e29171457aeebeadc9a0ef2818283b827b658345
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.816396 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj" event={"ID":"7498b2f4-5621-4e4d-8d34-d8fc09271dcf","Type":"ContainerStarted","Data":"8a18533c4ecff4bb9ead871eac66775891f5424bf93583282a2e3ec336ac1f3c"}
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.817118 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w" event={"ID":"10171911-dbe6-4b07-a58e-07713d8112c2","Type":"ContainerStarted","Data":"ac0780c7ad28585a2e8a4da4e29171457aeebeadc9a0ef2818283b827b658345"}
Nov 25 08:21:21 crc kubenswrapper[4760]: I1125 08:21:21.817898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-86mq8" event={"ID":"a6f5c6ad-5f4b-442a-9041-7f053349a0e7","Type":"ContainerStarted","Data":"452370bd9182da386cad14c5d1e333e62bcc25a498eba5e66a4df50601bcb412"}
Nov 25 08:21:24 crc kubenswrapper[4760]: I1125 08:21:24.833405 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w" event={"ID":"10171911-dbe6-4b07-a58e-07713d8112c2","Type":"ContainerStarted","Data":"90c56b8cb8bbc7071408184788a6ff72d73ea5a50f00c10a5f1099370bc951c4"}
Nov 25 08:21:24 crc kubenswrapper[4760]: I1125 08:21:24.833985 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w"
Nov 25 08:21:24 crc kubenswrapper[4760]: I1125 08:21:24.834621 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-86mq8" event={"ID":"a6f5c6ad-5f4b-442a-9041-7f053349a0e7","Type":"ContainerStarted","Data":"63a7580f99bac9edc09f3fd12a28a54be7e71711be652baa1ddeee4a9635c6ac"}
Nov 25 08:21:24 crc kubenswrapper[4760]: I1125 08:21:24.849684 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w" podStartSLOduration=1.3756553999999999 podStartE2EDuration="3.849667408s" podCreationTimestamp="2025-11-25 08:21:21 +0000 UTC" firstStartedPulling="2025-11-25 08:21:21.800129271 +0000 UTC m=+615.509160066" lastFinishedPulling="2025-11-25 08:21:24.274141259 +0000 UTC m=+617.983172074" observedRunningTime="2025-11-25 08:21:24.848619208 +0000 UTC m=+618.557650013" watchObservedRunningTime="2025-11-25 08:21:24.849667408 +0000 UTC m=+618.558698203"
Nov 25 08:21:25 crc kubenswrapper[4760]: I1125 08:21:25.848182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj" event={"ID":"7498b2f4-5621-4e4d-8d34-d8fc09271dcf","Type":"ContainerStarted","Data":"5e98f5db2c010e73c48cd3ce193e1de3189ac902b6a21c77c3adaf2f92798112"}
Nov 25 08:21:25 crc kubenswrapper[4760]: I1125 08:21:25.860176 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-86mq8" podStartSLOduration=2.384379148 podStartE2EDuration="4.860158566s" podCreationTimestamp="2025-11-25 08:21:21 +0000 UTC" firstStartedPulling="2025-11-25 08:21:21.738349932 +0000 UTC m=+615.447380727" lastFinishedPulling="2025-11-25 08:21:24.21412935 +0000 UTC m=+617.923160145" observedRunningTime="2025-11-25 08:21:24.865339992 +0000 UTC m=+618.574370807" watchObservedRunningTime="2025-11-25 08:21:25.860158566 +0000 UTC m=+619.569189361"
Nov 25 08:21:25 crc kubenswrapper[4760]: I1125 08:21:25.861015 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj" podStartSLOduration=1.6878890439999998 podStartE2EDuration="4.86100831s" podCreationTimestamp="2025-11-25 08:21:21 +0000 UTC" firstStartedPulling="2025-11-25 08:21:21.772770229 +0000 UTC m=+615.481801024" lastFinishedPulling="2025-11-25 08:21:24.945889495 +0000 UTC m=+618.654920290" observedRunningTime="2025-11-25 08:21:25.858855878 +0000 UTC m=+619.567886683" watchObservedRunningTime="2025-11-25 08:21:25.86100831 +0000 UTC m=+619.570039105"
Nov 25 08:21:31 crc kubenswrapper[4760]: I1125
08:21:31.328918 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c2bhp"] Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.329607 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovn-controller" containerID="cri-o://3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb" gracePeriod=30 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.329733 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovn-acl-logging" containerID="cri-o://8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57" gracePeriod=30 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.329708 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="northd" containerID="cri-o://0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f" gracePeriod=30 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.329782 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4" gracePeriod=30 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.329772 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kube-rbac-proxy-node" containerID="cri-o://bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6" gracePeriod=30 Nov 25 08:21:31 crc 
kubenswrapper[4760]: I1125 08:21:31.329870 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="sbdb" containerID="cri-o://da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d" gracePeriod=30 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.329883 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="nbdb" containerID="cri-o://890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d" gracePeriod=30 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.361917 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" containerID="cri-o://7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af" gracePeriod=30 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.494878 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-7849w" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.686568 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/3.log" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.689408 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovn-acl-logging/0.log" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.689985 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovn-controller/0.log" Nov 25 08:21:31 crc kubenswrapper[4760]: 
I1125 08:21:31.690490 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.746609 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.746682 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.750890 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nwh7d"] Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751077 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kubecfg-setup" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751090 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kubecfg-setup" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751100 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="sbdb" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751106 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="sbdb" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751113 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751119 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751128 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751133 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751141 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kube-rbac-proxy-node" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751147 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kube-rbac-proxy-node" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751157 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751163 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751170 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="nbdb" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751176 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="nbdb" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751185 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751192 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751198 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751204 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751211 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovn-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751217 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovn-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751222 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovn-acl-logging" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751228 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovn-acl-logging" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751240 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="northd" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751305 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="northd" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751393 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="sbdb" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751406 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751415 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="northd" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751424 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751432 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovn-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751438 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovn-acl-logging" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751448 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kube-rbac-proxy-node" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751456 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751463 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751472 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="nbdb" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.751564 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751570 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751673 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.751684 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerName="ovnkube-controller" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.753945 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785137 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-ovn-kubernetes\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785204 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-var-lib-openvswitch\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785235 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-openvswitch\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: 
\"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785278 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-kubelet\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785279 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785339 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785338 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785349 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785306 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-ovn\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785478 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-slash\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785522 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-log-socket\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: 
I1125 08:21:31.785558 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-slash" (OuterVolumeSpecName: "host-slash") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785576 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-systemd\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785605 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-log-socket" (OuterVolumeSpecName: "log-socket") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-env-overrides\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785656 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fk6n\" (UniqueName: \"kubernetes.io/projected/244c5c71-3110-4dcd-89f3-4dadfc405131-kube-api-access-2fk6n\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785691 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-script-lib\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785724 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-netd\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785748 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-config\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785782 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-var-lib-cni-networks-ovn-kubernetes\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785807 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-bin\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785834 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-node-log\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785858 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-netns\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785882 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-systemd-units\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785920 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/244c5c71-3110-4dcd-89f3-4dadfc405131-ovn-node-metrics-cert\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: 
\"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.785940 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-etc-openvswitch\") pod \"244c5c71-3110-4dcd-89f3-4dadfc405131\" (UID: \"244c5c71-3110-4dcd-89f3-4dadfc405131\") " Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786041 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786101 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786263 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-node-log" (OuterVolumeSpecName: "node-log") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786336 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786360 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786379 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786401 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786412 4760 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786433 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786448 4760 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786460 4760 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786472 4760 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786484 4760 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786495 4760 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786506 4760 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786517 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786528 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786539 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc 
kubenswrapper[4760]: I1125 08:21:31.786552 4760 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786565 4760 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.786562 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.795847 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/244c5c71-3110-4dcd-89f3-4dadfc405131-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.807635 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.808038 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244c5c71-3110-4dcd-89f3-4dadfc405131-kube-api-access-2fk6n" (OuterVolumeSpecName: "kube-api-access-2fk6n") pod "244c5c71-3110-4dcd-89f3-4dadfc405131" (UID: "244c5c71-3110-4dcd-89f3-4dadfc405131"). InnerVolumeSpecName "kube-api-access-2fk6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.879772 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/2.log" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.880207 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/1.log" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.880240 4760 generic.go:334] "Generic (PLEG): container finished" podID="29261de0-ae0c-4828-afed-e6036aa367cf" containerID="3e9a8382e6791cdaff72ff69f8e4d9f8d43d278f8f44f38094ed07a4d9a31cfd" exitCode=2 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.880303 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x6n7t" event={"ID":"29261de0-ae0c-4828-afed-e6036aa367cf","Type":"ContainerDied","Data":"3e9a8382e6791cdaff72ff69f8e4d9f8d43d278f8f44f38094ed07a4d9a31cfd"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.880335 4760 scope.go:117] "RemoveContainer" containerID="ad079c1c3d242243227f6b7cde3bad1670bfc9df7ddedaebd95c95a018b2f6c5" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.880828 4760 scope.go:117] "RemoveContainer" containerID="3e9a8382e6791cdaff72ff69f8e4d9f8d43d278f8f44f38094ed07a4d9a31cfd" Nov 25 08:21:31 crc kubenswrapper[4760]: E1125 08:21:31.881159 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-x6n7t_openshift-multus(29261de0-ae0c-4828-afed-e6036aa367cf)\"" pod="openshift-multus/multus-x6n7t" podUID="29261de0-ae0c-4828-afed-e6036aa367cf" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.882663 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovnkube-controller/3.log" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.888670 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-kubelet\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.888720 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-run-netns\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.888748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-cni-bin\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.888765 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-ovnkube-config\") pod 
\"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.888909 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-systemd-units\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.888946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-run-systemd\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.888974 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-env-overrides\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889089 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-ovnkube-script-lib\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889129 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-ovn-node-metrics-cert\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889162 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-cni-netd\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889353 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-slash\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889438 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-var-lib-openvswitch\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-log-socket\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889504 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ggtsr\" (UniqueName: \"kubernetes.io/projected/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-kube-api-access-ggtsr\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889621 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-node-log\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889647 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-run-ovn\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889685 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889709 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889730 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-run-openvswitch\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889798 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-etc-openvswitch\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889919 4760 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889942 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fk6n\" (UniqueName: \"kubernetes.io/projected/244c5c71-3110-4dcd-89f3-4dadfc405131-kube-api-access-2fk6n\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889956 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/244c5c71-3110-4dcd-89f3-4dadfc405131-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889974 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889984 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.889995 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/244c5c71-3110-4dcd-89f3-4dadfc405131-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.890008 4760 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/244c5c71-3110-4dcd-89f3-4dadfc405131-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.890313 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovn-acl-logging/0.log" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.890894 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c2bhp_244c5c71-3110-4dcd-89f3-4dadfc405131/ovn-controller/0.log" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891210 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af" exitCode=0 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891237 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d" exitCode=0 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891258 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d" exitCode=0 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891270 4760 generic.go:334] "Generic (PLEG): container 
finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f" exitCode=0 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891279 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4" exitCode=0 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891288 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6" exitCode=0 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891298 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57" exitCode=143 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891307 4760 generic.go:334] "Generic (PLEG): container finished" podID="244c5c71-3110-4dcd-89f3-4dadfc405131" containerID="3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb" exitCode=143 Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891331 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891363 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891379 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" 
event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891409 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891426 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891441 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891455 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891461 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891467 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891474 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891480 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891485 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891492 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891498 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891505 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891512 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891522 4760 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891528 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891535 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891542 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891549 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891556 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891562 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891569 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891577 4760 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891582 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891590 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891598 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891605 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891612 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891620 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891626 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} Nov 25 
08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891632 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891645 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891651 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891657 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891663 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" event={"ID":"244c5c71-3110-4dcd-89f3-4dadfc405131","Type":"ContainerDied","Data":"8806f0460a7db9cbd2ee718905b96b5e8f5048f68ac5117b85d7fe16613e7222"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891678 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891686 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891692 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891698 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891704 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891710 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891715 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891721 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891726 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891731 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2"} Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.891833 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c2bhp" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.917988 4760 scope.go:117] "RemoveContainer" containerID="7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.923092 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c2bhp"] Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.927065 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c2bhp"] Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.938581 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.954485 4760 scope.go:117] "RemoveContainer" containerID="da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.969138 4760 scope.go:117] "RemoveContainer" containerID="890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.984492 4760 scope.go:117] "RemoveContainer" containerID="0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990538 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-run-ovn\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990590 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990626 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990656 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-run-openvswitch\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990674 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-etc-openvswitch\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990699 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-kubelet\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990701 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-run-ovn\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990722 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-run-netns\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990719 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990744 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-cni-bin\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990739 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-cni-bin\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990775 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-run-openvswitch\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990794 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-run-netns\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990773 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-etc-openvswitch\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990822 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-kubelet\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990810 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-ovnkube-config\") 
pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-systemd-units\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990930 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-run-systemd\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990952 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-env-overrides\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.990997 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-ovnkube-script-lib\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-ovn-node-metrics-cert\") pod \"ovnkube-node-nwh7d\" (UID: 
\"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-cni-netd\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991088 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-slash\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991113 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-var-lib-openvswitch\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991138 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-log-socket\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggtsr\" (UniqueName: \"kubernetes.io/projected/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-kube-api-access-ggtsr\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-node-log\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991277 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-node-log\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991308 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-systemd-units\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991327 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-run-systemd\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991410 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-log-socket\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991457 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-slash\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991447 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-host-cni-netd\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-var-lib-openvswitch\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-ovnkube-config\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.991819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-env-overrides\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.992100 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-ovnkube-script-lib\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:31 crc kubenswrapper[4760]: I1125 08:21:31.994894 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-ovn-node-metrics-cert\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.001526 4760 scope.go:117] "RemoveContainer" containerID="858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.008427 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggtsr\" (UniqueName: \"kubernetes.io/projected/b444b5f2-6f06-41c8-b5bc-a4642c1bc60b-kube-api-access-ggtsr\") pod \"ovnkube-node-nwh7d\" (UID: \"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.016412 4760 scope.go:117] "RemoveContainer" containerID="bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.026487 4760 scope.go:117] "RemoveContainer" containerID="8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.037782 4760 scope.go:117] "RemoveContainer" containerID="3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.049336 4760 scope.go:117] "RemoveContainer" containerID="68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.059078 4760 scope.go:117] "RemoveContainer" 
containerID="7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.059956 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af\": container with ID starting with 7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af not found: ID does not exist" containerID="7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.060010 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} err="failed to get container status \"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af\": rpc error: code = NotFound desc = could not find container \"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af\": container with ID starting with 7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.060044 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.060356 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\": container with ID starting with 3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2 not found: ID does not exist" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.060374 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} err="failed to get container status \"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\": rpc error: code = NotFound desc = could not find container \"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\": container with ID starting with 3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.060386 4760 scope.go:117] "RemoveContainer" containerID="da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.060592 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\": container with ID starting with da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d not found: ID does not exist" containerID="da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.060613 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} err="failed to get container status \"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\": rpc error: code = NotFound desc = could not find container \"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\": container with ID starting with da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.060625 4760 scope.go:117] "RemoveContainer" containerID="890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.060840 4760 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\": container with ID starting with 890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d not found: ID does not exist" containerID="890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.060857 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} err="failed to get container status \"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\": rpc error: code = NotFound desc = could not find container \"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\": container with ID starting with 890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.060869 4760 scope.go:117] "RemoveContainer" containerID="0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.061077 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\": container with ID starting with 0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f not found: ID does not exist" containerID="0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.061096 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} err="failed to get container status \"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\": rpc error: code = NotFound desc = could not find container 
\"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\": container with ID starting with 0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.061109 4760 scope.go:117] "RemoveContainer" containerID="858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.061275 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\": container with ID starting with 858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4 not found: ID does not exist" containerID="858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.061295 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} err="failed to get container status \"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\": rpc error: code = NotFound desc = could not find container \"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\": container with ID starting with 858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.061306 4760 scope.go:117] "RemoveContainer" containerID="bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.061566 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\": container with ID starting with bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6 not found: ID does not exist" 
containerID="bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.061586 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} err="failed to get container status \"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\": rpc error: code = NotFound desc = could not find container \"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\": container with ID starting with bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.061596 4760 scope.go:117] "RemoveContainer" containerID="8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.061781 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\": container with ID starting with 8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57 not found: ID does not exist" containerID="8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.061798 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} err="failed to get container status \"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\": rpc error: code = NotFound desc = could not find container \"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\": container with ID starting with 8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.061812 4760 scope.go:117] 
"RemoveContainer" containerID="3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.062075 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\": container with ID starting with 3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb not found: ID does not exist" containerID="3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062093 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} err="failed to get container status \"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\": rpc error: code = NotFound desc = could not find container \"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\": container with ID starting with 3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062104 4760 scope.go:117] "RemoveContainer" containerID="68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2" Nov 25 08:21:32 crc kubenswrapper[4760]: E1125 08:21:32.062291 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\": container with ID starting with 68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2 not found: ID does not exist" containerID="68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062313 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2"} err="failed to get container status \"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\": rpc error: code = NotFound desc = could not find container \"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\": container with ID starting with 68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062329 4760 scope.go:117] "RemoveContainer" containerID="7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062486 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} err="failed to get container status \"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af\": rpc error: code = NotFound desc = could not find container \"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af\": container with ID starting with 7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062502 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062652 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} err="failed to get container status \"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\": rpc error: code = NotFound desc = could not find container \"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\": container with ID starting with 3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2 not found: ID does not 
exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062669 4760 scope.go:117] "RemoveContainer" containerID="da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062882 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} err="failed to get container status \"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\": rpc error: code = NotFound desc = could not find container \"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\": container with ID starting with da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.062898 4760 scope.go:117] "RemoveContainer" containerID="890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.063081 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} err="failed to get container status \"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\": rpc error: code = NotFound desc = could not find container \"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\": container with ID starting with 890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.063101 4760 scope.go:117] "RemoveContainer" containerID="0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.063497 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} err="failed to get container status 
\"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\": rpc error: code = NotFound desc = could not find container \"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\": container with ID starting with 0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.063516 4760 scope.go:117] "RemoveContainer" containerID="858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.063684 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} err="failed to get container status \"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\": rpc error: code = NotFound desc = could not find container \"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\": container with ID starting with 858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.063702 4760 scope.go:117] "RemoveContainer" containerID="bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.063831 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} err="failed to get container status \"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\": rpc error: code = NotFound desc = could not find container \"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\": container with ID starting with bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.063848 4760 scope.go:117] "RemoveContainer" 
containerID="8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064036 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} err="failed to get container status \"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\": rpc error: code = NotFound desc = could not find container \"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\": container with ID starting with 8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064054 4760 scope.go:117] "RemoveContainer" containerID="3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064177 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} err="failed to get container status \"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\": rpc error: code = NotFound desc = could not find container \"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\": container with ID starting with 3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064195 4760 scope.go:117] "RemoveContainer" containerID="68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064402 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2"} err="failed to get container status \"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\": rpc error: code = NotFound desc = could 
not find container \"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\": container with ID starting with 68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064424 4760 scope.go:117] "RemoveContainer" containerID="7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064576 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} err="failed to get container status \"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af\": rpc error: code = NotFound desc = could not find container \"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af\": container with ID starting with 7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064591 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064741 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} err="failed to get container status \"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\": rpc error: code = NotFound desc = could not find container \"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\": container with ID starting with 3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.064757 4760 scope.go:117] "RemoveContainer" containerID="da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 
08:21:32.064985 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} err="failed to get container status \"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\": rpc error: code = NotFound desc = could not find container \"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\": container with ID starting with da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.065003 4760 scope.go:117] "RemoveContainer" containerID="890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.065162 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} err="failed to get container status \"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\": rpc error: code = NotFound desc = could not find container \"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\": container with ID starting with 890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.065179 4760 scope.go:117] "RemoveContainer" containerID="0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.065437 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} err="failed to get container status \"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\": rpc error: code = NotFound desc = could not find container \"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\": container with ID starting with 
0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.065464 4760 scope.go:117] "RemoveContainer" containerID="858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.065753 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} err="failed to get container status \"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\": rpc error: code = NotFound desc = could not find container \"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\": container with ID starting with 858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.065796 4760 scope.go:117] "RemoveContainer" containerID="bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066053 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} err="failed to get container status \"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\": rpc error: code = NotFound desc = could not find container \"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\": container with ID starting with bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066100 4760 scope.go:117] "RemoveContainer" containerID="8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066336 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} err="failed to get container status \"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\": rpc error: code = NotFound desc = could not find container \"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\": container with ID starting with 8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066363 4760 scope.go:117] "RemoveContainer" containerID="3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066538 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} err="failed to get container status \"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\": rpc error: code = NotFound desc = could not find container \"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\": container with ID starting with 3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066567 4760 scope.go:117] "RemoveContainer" containerID="68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066764 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2"} err="failed to get container status \"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\": rpc error: code = NotFound desc = could not find container \"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\": container with ID starting with 68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2 not found: ID does not 
exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066781 4760 scope.go:117] "RemoveContainer" containerID="7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066941 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af"} err="failed to get container status \"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af\": rpc error: code = NotFound desc = could not find container \"7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af\": container with ID starting with 7c53003a11808a0de17a2b1eca22066ddba0a5ca3c33cde40eb783bf2a6ce4af not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.066957 4760 scope.go:117] "RemoveContainer" containerID="3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.067094 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2"} err="failed to get container status \"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\": rpc error: code = NotFound desc = could not find container \"3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2\": container with ID starting with 3cc6663dfaf43ec4a0638e3970d4e2a4a93c16b8b50668eab9a1cc49901d53e2 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.067112 4760 scope.go:117] "RemoveContainer" containerID="da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.067285 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d"} err="failed to get container status 
\"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\": rpc error: code = NotFound desc = could not find container \"da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d\": container with ID starting with da8e7077bf106d00994857b675cd48e09debee546fc0a2218cfcaded5660342d not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.067306 4760 scope.go:117] "RemoveContainer" containerID="890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.067656 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d"} err="failed to get container status \"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\": rpc error: code = NotFound desc = could not find container \"890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d\": container with ID starting with 890be3c328a480a5c3d1f454b60994dbe90c40d63af6998996a4011243d3911d not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.067675 4760 scope.go:117] "RemoveContainer" containerID="0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.067842 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f"} err="failed to get container status \"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\": rpc error: code = NotFound desc = could not find container \"0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f\": container with ID starting with 0f0d3e8ea95bd394242b298c92ff994cb97cfd8ee113dc0d73a543314d6eb79f not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.067859 4760 scope.go:117] "RemoveContainer" 
containerID="858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.068010 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4"} err="failed to get container status \"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\": rpc error: code = NotFound desc = could not find container \"858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4\": container with ID starting with 858487c0336ab8170ec3fd67fe10f8cbeeb31f6be8cdd48afa6fe19dcf0043a4 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.068026 4760 scope.go:117] "RemoveContainer" containerID="bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.068233 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6"} err="failed to get container status \"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\": rpc error: code = NotFound desc = could not find container \"bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6\": container with ID starting with bf7e6ee47e0d0657f466be64e92062005381128f7c3f7098c35d8435f775ddf6 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.068279 4760 scope.go:117] "RemoveContainer" containerID="8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.068547 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.068956 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57"} err="failed to get container status \"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\": rpc error: code = NotFound desc = could not find container \"8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57\": container with ID starting with 8a0534a27b6afbbe539f51dbc7210a0958725569717df50d797eba39f7892c57 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.068986 4760 scope.go:117] "RemoveContainer" containerID="3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.069578 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb"} err="failed to get container status \"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\": rpc error: code = NotFound desc = could not find container \"3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb\": container with ID starting with 3310a508eebc9556bb4b87f6e9ca87c6b730009bf0060337b8fc97e1901cdadb not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.069600 4760 scope.go:117] "RemoveContainer" containerID="68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.069834 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2"} err="failed to get container status \"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\": rpc error: code = NotFound desc = could not 
find container \"68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2\": container with ID starting with 68f8d1da8a96f6cd260fc84e41325f1e56e3db60a7e8d5dc7fd3d3978f3435e2 not found: ID does not exist" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.899262 4760 generic.go:334] "Generic (PLEG): container finished" podID="b444b5f2-6f06-41c8-b5bc-a4642c1bc60b" containerID="4ef0e71b6036c74be198d4f2af41b19bfd5653a0a077d1f4a0b2166805be6ba3" exitCode=0 Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.899350 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerDied","Data":"4ef0e71b6036c74be198d4f2af41b19bfd5653a0a077d1f4a0b2166805be6ba3"} Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.899377 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerStarted","Data":"97afc709e29d1e751c6d8629c69bf4792b043efe005dbaa86fc583848d1cd1d1"} Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.906093 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/2.log" Nov 25 08:21:32 crc kubenswrapper[4760]: I1125 08:21:32.946316 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="244c5c71-3110-4dcd-89f3-4dadfc405131" path="/var/lib/kubelet/pods/244c5c71-3110-4dcd-89f3-4dadfc405131/volumes" Nov 25 08:21:33 crc kubenswrapper[4760]: I1125 08:21:33.916998 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerStarted","Data":"726ffff1cc62b4fc495300c6714b489d81ed5449491463f12056bdd134627b42"} Nov 25 08:21:33 crc kubenswrapper[4760]: I1125 08:21:33.917519 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerStarted","Data":"467c213a68fb94fad798ee840203946f1d66c91b74cb8a490d97f711e9c85f8b"} Nov 25 08:21:33 crc kubenswrapper[4760]: I1125 08:21:33.917531 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerStarted","Data":"adc1de23fbebecf8bf3c1d22081601be738b306e29a51d66d5fe75bf17bd3e3b"} Nov 25 08:21:33 crc kubenswrapper[4760]: I1125 08:21:33.917539 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerStarted","Data":"417fe904d915e30ab52bb50e0ab0b1c4a24aed37e4f4cac08e65f79232b12c47"} Nov 25 08:21:33 crc kubenswrapper[4760]: I1125 08:21:33.917546 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerStarted","Data":"47147cf2fde3c9258ad5af58d8be3702bca99253ee8a691e12f1cd196e0d2fdb"} Nov 25 08:21:33 crc kubenswrapper[4760]: I1125 08:21:33.917554 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerStarted","Data":"bb58193d9a765db6b6a48db6808bbeaad576fbc6bfca800e8c5dc2d2571e6b8b"} Nov 25 08:21:35 crc kubenswrapper[4760]: I1125 08:21:35.932492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerStarted","Data":"c98877397d589bf274bd18dd53f51c49f3131673f49a5c264cee3ad2a1e5fcfe"} Nov 25 08:21:38 crc kubenswrapper[4760]: I1125 08:21:38.951180 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" 
event={"ID":"b444b5f2-6f06-41c8-b5bc-a4642c1bc60b","Type":"ContainerStarted","Data":"85f8d6bb120d071bde1683ace1ed176873b287a3233833196f1f8036deb8493c"} Nov 25 08:21:38 crc kubenswrapper[4760]: I1125 08:21:38.951509 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:38 crc kubenswrapper[4760]: I1125 08:21:38.951521 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:38 crc kubenswrapper[4760]: I1125 08:21:38.951530 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:38 crc kubenswrapper[4760]: I1125 08:21:38.980368 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" podStartSLOduration=7.980349935 podStartE2EDuration="7.980349935s" podCreationTimestamp="2025-11-25 08:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:21:38.975498784 +0000 UTC m=+632.684529589" watchObservedRunningTime="2025-11-25 08:21:38.980349935 +0000 UTC m=+632.689380730" Nov 25 08:21:38 crc kubenswrapper[4760]: I1125 08:21:38.982758 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:38 crc kubenswrapper[4760]: I1125 08:21:38.982837 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:21:45 crc kubenswrapper[4760]: I1125 08:21:45.938495 4760 scope.go:117] "RemoveContainer" containerID="3e9a8382e6791cdaff72ff69f8e4d9f8d43d278f8f44f38094ed07a4d9a31cfd" Nov 25 08:21:45 crc kubenswrapper[4760]: E1125 08:21:45.939210 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-x6n7t_openshift-multus(29261de0-ae0c-4828-afed-e6036aa367cf)\"" pod="openshift-multus/multus-x6n7t" podUID="29261de0-ae0c-4828-afed-e6036aa367cf" Nov 25 08:21:59 crc kubenswrapper[4760]: I1125 08:21:59.938457 4760 scope.go:117] "RemoveContainer" containerID="3e9a8382e6791cdaff72ff69f8e4d9f8d43d278f8f44f38094ed07a4d9a31cfd" Nov 25 08:22:01 crc kubenswrapper[4760]: I1125 08:22:01.063017 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x6n7t_29261de0-ae0c-4828-afed-e6036aa367cf/kube-multus/2.log" Nov 25 08:22:01 crc kubenswrapper[4760]: I1125 08:22:01.063624 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x6n7t" event={"ID":"29261de0-ae0c-4828-afed-e6036aa367cf","Type":"ContainerStarted","Data":"9458f5b088949409e9e1f270b2357dff0a52c1d6950c3ea298b428a2d931da35"} Nov 25 08:22:01 crc kubenswrapper[4760]: I1125 08:22:01.746615 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:22:01 crc kubenswrapper[4760]: I1125 08:22:01.746738 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:22:01 crc kubenswrapper[4760]: I1125 08:22:01.746821 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:22:01 crc kubenswrapper[4760]: I1125 08:22:01.747811 4760 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ea91d6699ab5d174bc8311b29a2b59a97368218bd86cb03b23aecea38616074"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:22:01 crc kubenswrapper[4760]: I1125 08:22:01.747904 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://8ea91d6699ab5d174bc8311b29a2b59a97368218bd86cb03b23aecea38616074" gracePeriod=600 Nov 25 08:22:02 crc kubenswrapper[4760]: I1125 08:22:02.072901 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="8ea91d6699ab5d174bc8311b29a2b59a97368218bd86cb03b23aecea38616074" exitCode=0 Nov 25 08:22:02 crc kubenswrapper[4760]: I1125 08:22:02.072954 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"8ea91d6699ab5d174bc8311b29a2b59a97368218bd86cb03b23aecea38616074"} Nov 25 08:22:02 crc kubenswrapper[4760]: I1125 08:22:02.073289 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"1b1cf405379b8f080f8ca00a8aea4c263e37ea8900c6a162c41370800ee44d84"} Nov 25 08:22:02 crc kubenswrapper[4760]: I1125 08:22:02.073312 4760 scope.go:117] "RemoveContainer" containerID="308caa4dc448cb9739c473fcfee251cdc29a87eaebc0beb1e3567269bf4c7aa2" Nov 25 08:22:02 crc kubenswrapper[4760]: I1125 08:22:02.094451 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-nwh7d" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.214912 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8"] Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.216467 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.217847 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.226888 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.226959 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5dwq\" (UniqueName: \"kubernetes.io/projected/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-kube-api-access-t5dwq\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.227058 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8\" (UID: 
\"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.227650 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8"] Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.328130 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.328212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.328280 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5dwq\" (UniqueName: \"kubernetes.io/projected/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-kube-api-access-t5dwq\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.328690 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-util\") pod 
\"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.328857 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.346631 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5dwq\" (UniqueName: \"kubernetes.io/projected/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-kube-api-access-t5dwq\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.531177 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:14 crc kubenswrapper[4760]: I1125 08:22:14.713797 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8"] Nov 25 08:22:15 crc kubenswrapper[4760]: I1125 08:22:15.140372 4760 generic.go:334] "Generic (PLEG): container finished" podID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerID="b906d5011feadf751b17e8b978af9d2d9ba872f1f3d3dd67f1eb08ace6c172da" exitCode=0 Nov 25 08:22:15 crc kubenswrapper[4760]: I1125 08:22:15.140418 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" event={"ID":"0554a1c9-798a-47ca-a9c3-7b57e649ddeb","Type":"ContainerDied","Data":"b906d5011feadf751b17e8b978af9d2d9ba872f1f3d3dd67f1eb08ace6c172da"} Nov 25 08:22:15 crc kubenswrapper[4760]: I1125 08:22:15.140447 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" event={"ID":"0554a1c9-798a-47ca-a9c3-7b57e649ddeb","Type":"ContainerStarted","Data":"7ea73868eadb9d0ad5d70bdaf4084d379c4cdd38589aa2b6df9dc731e3334499"} Nov 25 08:22:17 crc kubenswrapper[4760]: I1125 08:22:17.150771 4760 generic.go:334] "Generic (PLEG): container finished" podID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerID="9d71f4eb616077056951368602f968d38807acb1ac842cdf1784b5fb7b48e6f5" exitCode=0 Nov 25 08:22:17 crc kubenswrapper[4760]: I1125 08:22:17.150870 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" event={"ID":"0554a1c9-798a-47ca-a9c3-7b57e649ddeb","Type":"ContainerDied","Data":"9d71f4eb616077056951368602f968d38807acb1ac842cdf1784b5fb7b48e6f5"} Nov 25 08:22:18 crc kubenswrapper[4760]: I1125 08:22:18.158029 4760 
generic.go:334] "Generic (PLEG): container finished" podID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerID="68f5dea6c9bf7430e51f2a0bd0e105b59c582684b1515493bec902a51b0bf882" exitCode=0 Nov 25 08:22:18 crc kubenswrapper[4760]: I1125 08:22:18.158087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" event={"ID":"0554a1c9-798a-47ca-a9c3-7b57e649ddeb","Type":"ContainerDied","Data":"68f5dea6c9bf7430e51f2a0bd0e105b59c582684b1515493bec902a51b0bf882"} Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.444550 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.582887 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5dwq\" (UniqueName: \"kubernetes.io/projected/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-kube-api-access-t5dwq\") pod \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.582936 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-bundle\") pod \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.583020 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-util\") pod \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\" (UID: \"0554a1c9-798a-47ca-a9c3-7b57e649ddeb\") " Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.584234 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-bundle" (OuterVolumeSpecName: "bundle") pod "0554a1c9-798a-47ca-a9c3-7b57e649ddeb" (UID: "0554a1c9-798a-47ca-a9c3-7b57e649ddeb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.587851 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-kube-api-access-t5dwq" (OuterVolumeSpecName: "kube-api-access-t5dwq") pod "0554a1c9-798a-47ca-a9c3-7b57e649ddeb" (UID: "0554a1c9-798a-47ca-a9c3-7b57e649ddeb"). InnerVolumeSpecName "kube-api-access-t5dwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.596307 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-util" (OuterVolumeSpecName: "util") pod "0554a1c9-798a-47ca-a9c3-7b57e649ddeb" (UID: "0554a1c9-798a-47ca-a9c3-7b57e649ddeb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.683984 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-util\") on node \"crc\" DevicePath \"\"" Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.684018 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5dwq\" (UniqueName: \"kubernetes.io/projected/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-kube-api-access-t5dwq\") on node \"crc\" DevicePath \"\"" Nov 25 08:22:19 crc kubenswrapper[4760]: I1125 08:22:19.684076 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0554a1c9-798a-47ca-a9c3-7b57e649ddeb-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:22:20 crc kubenswrapper[4760]: I1125 08:22:20.170551 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" event={"ID":"0554a1c9-798a-47ca-a9c3-7b57e649ddeb","Type":"ContainerDied","Data":"7ea73868eadb9d0ad5d70bdaf4084d379c4cdd38589aa2b6df9dc731e3334499"} Nov 25 08:22:20 crc kubenswrapper[4760]: I1125 08:22:20.170594 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ea73868eadb9d0ad5d70bdaf4084d379c4cdd38589aa2b6df9dc731e3334499" Nov 25 08:22:20 crc kubenswrapper[4760]: I1125 08:22:20.170716 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.845374 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-cjvcc"] Nov 25 08:22:21 crc kubenswrapper[4760]: E1125 08:22:21.845574 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerName="extract" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.845586 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerName="extract" Nov 25 08:22:21 crc kubenswrapper[4760]: E1125 08:22:21.845601 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerName="pull" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.845607 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerName="pull" Nov 25 08:22:21 crc kubenswrapper[4760]: E1125 08:22:21.845618 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerName="util" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.845624 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerName="util" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.845723 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0554a1c9-798a-47ca-a9c3-7b57e649ddeb" containerName="extract" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.846062 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-cjvcc" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.851753 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-m8sxg" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.852162 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.852215 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 08:22:21 crc kubenswrapper[4760]: I1125 08:22:21.867651 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-cjvcc"] Nov 25 08:22:22 crc kubenswrapper[4760]: I1125 08:22:22.010271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7699\" (UniqueName: \"kubernetes.io/projected/08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff-kube-api-access-l7699\") pod \"nmstate-operator-557fdffb88-cjvcc\" (UID: \"08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-cjvcc" Nov 25 08:22:22 crc kubenswrapper[4760]: I1125 08:22:22.111445 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7699\" (UniqueName: \"kubernetes.io/projected/08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff-kube-api-access-l7699\") pod \"nmstate-operator-557fdffb88-cjvcc\" (UID: \"08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-cjvcc" Nov 25 08:22:22 crc kubenswrapper[4760]: I1125 08:22:22.142452 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7699\" (UniqueName: \"kubernetes.io/projected/08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff-kube-api-access-l7699\") pod \"nmstate-operator-557fdffb88-cjvcc\" (UID: 
\"08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-cjvcc" Nov 25 08:22:22 crc kubenswrapper[4760]: I1125 08:22:22.159127 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-cjvcc" Nov 25 08:22:22 crc kubenswrapper[4760]: I1125 08:22:22.326513 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-cjvcc"] Nov 25 08:22:22 crc kubenswrapper[4760]: W1125 08:22:22.340270 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08faa7c7_5fae_4dc8_9eb8_a83a6f7055ff.slice/crio-d48c2a826f71f61fdb992a1f8d90b408e74308d25b05caa84365b0a2a7261313 WatchSource:0}: Error finding container d48c2a826f71f61fdb992a1f8d90b408e74308d25b05caa84365b0a2a7261313: Status 404 returned error can't find the container with id d48c2a826f71f61fdb992a1f8d90b408e74308d25b05caa84365b0a2a7261313 Nov 25 08:22:23 crc kubenswrapper[4760]: I1125 08:22:23.188289 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-cjvcc" event={"ID":"08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff","Type":"ContainerStarted","Data":"d48c2a826f71f61fdb992a1f8d90b408e74308d25b05caa84365b0a2a7261313"} Nov 25 08:22:25 crc kubenswrapper[4760]: I1125 08:22:25.199871 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-cjvcc" event={"ID":"08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff","Type":"ContainerStarted","Data":"d50e47629f6e0f5921640c7d89ce6e77aa2386cc4950aa994adb186e41b354d4"} Nov 25 08:22:25 crc kubenswrapper[4760]: I1125 08:22:25.221758 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-cjvcc" podStartSLOduration=2.316772413 podStartE2EDuration="4.221740614s" podCreationTimestamp="2025-11-25 08:22:21 +0000 UTC" 
firstStartedPulling="2025-11-25 08:22:22.343688476 +0000 UTC m=+676.052719271" lastFinishedPulling="2025-11-25 08:22:24.248656677 +0000 UTC m=+677.957687472" observedRunningTime="2025-11-25 08:22:25.221294722 +0000 UTC m=+678.930325527" watchObservedRunningTime="2025-11-25 08:22:25.221740614 +0000 UTC m=+678.930771409" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.132020 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.133822 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.137758 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-n6sq6" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.150689 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.151800 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.154803 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.156361 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.164987 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs56b\" (UniqueName: \"kubernetes.io/projected/a7203aa8-a498-4242-9c79-3bcfb384707e-kube-api-access-qs56b\") pod \"nmstate-metrics-5dcf9c57c5-c27qr\" (UID: \"a7203aa8-a498-4242-9c79-3bcfb384707e\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.165066 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vgs\" (UniqueName: \"kubernetes.io/projected/133b40ac-61d0-4821-813d-a3f722f95293-kube-api-access-c6vgs\") pod \"nmstate-webhook-6b89b748d8-p7b9n\" (UID: \"133b40ac-61d0-4821-813d-a3f722f95293\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.165095 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/133b40ac-61d0-4821-813d-a3f722f95293-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-p7b9n\" (UID: \"133b40ac-61d0-4821-813d-a3f722f95293\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.167269 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.200959 4760 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ld6xj"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.201771 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.265740 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6vgs\" (UniqueName: \"kubernetes.io/projected/133b40ac-61d0-4821-813d-a3f722f95293-kube-api-access-c6vgs\") pod \"nmstate-webhook-6b89b748d8-p7b9n\" (UID: \"133b40ac-61d0-4821-813d-a3f722f95293\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.265781 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/133b40ac-61d0-4821-813d-a3f722f95293-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-p7b9n\" (UID: \"133b40ac-61d0-4821-813d-a3f722f95293\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.265811 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/adb17860-3ba6-4771-88db-d63cebf97628-nmstate-lock\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.265837 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/adb17860-3ba6-4771-88db-d63cebf97628-dbus-socket\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.265854 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mdn9\" (UniqueName: \"kubernetes.io/projected/adb17860-3ba6-4771-88db-d63cebf97628-kube-api-access-6mdn9\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.265898 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs56b\" (UniqueName: \"kubernetes.io/projected/a7203aa8-a498-4242-9c79-3bcfb384707e-kube-api-access-qs56b\") pod \"nmstate-metrics-5dcf9c57c5-c27qr\" (UID: \"a7203aa8-a498-4242-9c79-3bcfb384707e\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.265925 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/adb17860-3ba6-4771-88db-d63cebf97628-ovs-socket\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: E1125 08:22:26.266232 4760 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 25 08:22:26 crc kubenswrapper[4760]: E1125 08:22:26.266292 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/133b40ac-61d0-4821-813d-a3f722f95293-tls-key-pair podName:133b40ac-61d0-4821-813d-a3f722f95293 nodeName:}" failed. No retries permitted until 2025-11-25 08:22:26.76627432 +0000 UTC m=+680.475305115 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/133b40ac-61d0-4821-813d-a3f722f95293-tls-key-pair") pod "nmstate-webhook-6b89b748d8-p7b9n" (UID: "133b40ac-61d0-4821-813d-a3f722f95293") : secret "openshift-nmstate-webhook" not found Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.287370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs56b\" (UniqueName: \"kubernetes.io/projected/a7203aa8-a498-4242-9c79-3bcfb384707e-kube-api-access-qs56b\") pod \"nmstate-metrics-5dcf9c57c5-c27qr\" (UID: \"a7203aa8-a498-4242-9c79-3bcfb384707e\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.290889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vgs\" (UniqueName: \"kubernetes.io/projected/133b40ac-61d0-4821-813d-a3f722f95293-kube-api-access-c6vgs\") pod \"nmstate-webhook-6b89b748d8-p7b9n\" (UID: \"133b40ac-61d0-4821-813d-a3f722f95293\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.312979 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.313615 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.315624 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.315956 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vkrgb" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.315959 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.326277 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.367200 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/adb17860-3ba6-4771-88db-d63cebf97628-nmstate-lock\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.367304 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-cj4rl\" (UID: \"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.367331 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/adb17860-3ba6-4771-88db-d63cebf97628-nmstate-lock\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc 
kubenswrapper[4760]: I1125 08:22:26.367346 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/adb17860-3ba6-4771-88db-d63cebf97628-dbus-socket\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.367398 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mdn9\" (UniqueName: \"kubernetes.io/projected/adb17860-3ba6-4771-88db-d63cebf97628-kube-api-access-6mdn9\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.367554 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-cj4rl\" (UID: \"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.367639 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/adb17860-3ba6-4771-88db-d63cebf97628-dbus-socket\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.367653 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9bg\" (UniqueName: \"kubernetes.io/projected/9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b-kube-api-access-cl9bg\") pod \"nmstate-console-plugin-5874bd7bc5-cj4rl\" (UID: \"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b\") " 
pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.367964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/adb17860-3ba6-4771-88db-d63cebf97628-ovs-socket\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.368030 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/adb17860-3ba6-4771-88db-d63cebf97628-ovs-socket\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.384827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mdn9\" (UniqueName: \"kubernetes.io/projected/adb17860-3ba6-4771-88db-d63cebf97628-kube-api-access-6mdn9\") pod \"nmstate-handler-ld6xj\" (UID: \"adb17860-3ba6-4771-88db-d63cebf97628\") " pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.451328 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.468678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-cj4rl\" (UID: \"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.468729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl9bg\" (UniqueName: \"kubernetes.io/projected/9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b-kube-api-access-cl9bg\") pod \"nmstate-console-plugin-5874bd7bc5-cj4rl\" (UID: \"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.468798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-cj4rl\" (UID: \"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.469600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-cj4rl\" (UID: \"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.473840 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-cj4rl\" (UID: \"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.485847 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f49d7b5fb-46nzv"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.486637 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.494015 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl9bg\" (UniqueName: \"kubernetes.io/projected/9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b-kube-api-access-cl9bg\") pod \"nmstate-console-plugin-5874bd7bc5-cj4rl\" (UID: \"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.521560 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.538301 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f49d7b5fb-46nzv"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.569485 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-service-ca\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.569527 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5420a2da-5073-43e8-9c4c-5de72316163e-console-serving-cert\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.569562 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsfjz\" (UniqueName: \"kubernetes.io/projected/5420a2da-5073-43e8-9c4c-5de72316163e-kube-api-access-vsfjz\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.569600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-trusted-ca-bundle\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.569672 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5420a2da-5073-43e8-9c4c-5de72316163e-console-oauth-config\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.569693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-oauth-serving-cert\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.569729 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-console-config\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.634792 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.670527 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-service-ca\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.670566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5420a2da-5073-43e8-9c4c-5de72316163e-console-serving-cert\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.670595 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsfjz\" (UniqueName: \"kubernetes.io/projected/5420a2da-5073-43e8-9c4c-5de72316163e-kube-api-access-vsfjz\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.670623 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-trusted-ca-bundle\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.670649 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5420a2da-5073-43e8-9c4c-5de72316163e-console-oauth-config\") pod \"console-f49d7b5fb-46nzv\" (UID: 
\"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.670662 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-oauth-serving-cert\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.670689 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-console-config\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.671413 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-console-config\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.671951 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-service-ca\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.675637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-trusted-ca-bundle\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " 
pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.676618 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5420a2da-5073-43e8-9c4c-5de72316163e-oauth-serving-cert\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.679773 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5420a2da-5073-43e8-9c4c-5de72316163e-console-oauth-config\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.685613 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.685869 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5420a2da-5073-43e8-9c4c-5de72316163e-console-serving-cert\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.689574 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsfjz\" (UniqueName: \"kubernetes.io/projected/5420a2da-5073-43e8-9c4c-5de72316163e-kube-api-access-vsfjz\") pod \"console-f49d7b5fb-46nzv\" (UID: \"5420a2da-5073-43e8-9c4c-5de72316163e\") " pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.771841 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/133b40ac-61d0-4821-813d-a3f722f95293-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-p7b9n\" (UID: \"133b40ac-61d0-4821-813d-a3f722f95293\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.778555 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/133b40ac-61d0-4821-813d-a3f722f95293-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-p7b9n\" (UID: \"133b40ac-61d0-4821-813d-a3f722f95293\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.818014 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl"] Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.822493 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:26 crc kubenswrapper[4760]: I1125 08:22:26.996374 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f49d7b5fb-46nzv"] Nov 25 08:22:27 crc kubenswrapper[4760]: W1125 08:22:27.006206 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5420a2da_5073_43e8_9c4c_5de72316163e.slice/crio-c981fc58f7ec7bf630b78c102c318640a850c19c89dea2ba1d25602ca7f1e419 WatchSource:0}: Error finding container c981fc58f7ec7bf630b78c102c318640a850c19c89dea2ba1d25602ca7f1e419: Status 404 returned error can't find the container with id c981fc58f7ec7bf630b78c102c318640a850c19c89dea2ba1d25602ca7f1e419 Nov 25 08:22:27 crc kubenswrapper[4760]: I1125 08:22:27.067286 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:27 crc kubenswrapper[4760]: I1125 08:22:27.211577 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" event={"ID":"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b","Type":"ContainerStarted","Data":"f6c47d1ac8a55d6c4b80823bd3fbf57f5154ee38c8b479f1fbdff513e72f0bee"} Nov 25 08:22:27 crc kubenswrapper[4760]: I1125 08:22:27.212470 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr" event={"ID":"a7203aa8-a498-4242-9c79-3bcfb384707e","Type":"ContainerStarted","Data":"91929664186dd627ab7945af0e738fa5ff6373ea41e1ba648f767a1eaeb5b93b"} Nov 25 08:22:27 crc kubenswrapper[4760]: I1125 08:22:27.213344 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ld6xj" event={"ID":"adb17860-3ba6-4771-88db-d63cebf97628","Type":"ContainerStarted","Data":"f60e9b18dda82959aed81e7630a619170c407c15ae2db7c93fece886e89539f3"} Nov 25 08:22:27 crc kubenswrapper[4760]: I1125 08:22:27.214580 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f49d7b5fb-46nzv" event={"ID":"5420a2da-5073-43e8-9c4c-5de72316163e","Type":"ContainerStarted","Data":"89e2f2a5beb67926c8f96e4d714721bf452729f98aad28f1c9e1660dc16ee8ef"} Nov 25 08:22:27 crc kubenswrapper[4760]: I1125 08:22:27.214614 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f49d7b5fb-46nzv" event={"ID":"5420a2da-5073-43e8-9c4c-5de72316163e","Type":"ContainerStarted","Data":"c981fc58f7ec7bf630b78c102c318640a850c19c89dea2ba1d25602ca7f1e419"} Nov 25 08:22:27 crc kubenswrapper[4760]: I1125 08:22:27.238198 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f49d7b5fb-46nzv" podStartSLOduration=1.2381792329999999 podStartE2EDuration="1.238179233s" podCreationTimestamp="2025-11-25 08:22:26 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:22:27.235743992 +0000 UTC m=+680.944774807" watchObservedRunningTime="2025-11-25 08:22:27.238179233 +0000 UTC m=+680.947210028" Nov 25 08:22:27 crc kubenswrapper[4760]: I1125 08:22:27.257594 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n"] Nov 25 08:22:27 crc kubenswrapper[4760]: W1125 08:22:27.267150 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod133b40ac_61d0_4821_813d_a3f722f95293.slice/crio-39f85eade6b497fb14ad3a54626ac21590afca2fdb346e7978fd5a9b74235eda WatchSource:0}: Error finding container 39f85eade6b497fb14ad3a54626ac21590afca2fdb346e7978fd5a9b74235eda: Status 404 returned error can't find the container with id 39f85eade6b497fb14ad3a54626ac21590afca2fdb346e7978fd5a9b74235eda Nov 25 08:22:28 crc kubenswrapper[4760]: I1125 08:22:28.221924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" event={"ID":"133b40ac-61d0-4821-813d-a3f722f95293","Type":"ContainerStarted","Data":"39f85eade6b497fb14ad3a54626ac21590afca2fdb346e7978fd5a9b74235eda"} Nov 25 08:22:30 crc kubenswrapper[4760]: I1125 08:22:30.246218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr" event={"ID":"a7203aa8-a498-4242-9c79-3bcfb384707e","Type":"ContainerStarted","Data":"96b0606a30a2d0fb3bc446b7cb4fcdf389fea066020e08998c58956da741a341"} Nov 25 08:22:30 crc kubenswrapper[4760]: I1125 08:22:30.248027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ld6xj" event={"ID":"adb17860-3ba6-4771-88db-d63cebf97628","Type":"ContainerStarted","Data":"0e5836ebad21b69753ef8c4325ebd49365fbd9fb792485122525a947e68e124c"} Nov 25 08:22:30 crc 
kubenswrapper[4760]: I1125 08:22:30.248306 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:30 crc kubenswrapper[4760]: I1125 08:22:30.249834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" event={"ID":"133b40ac-61d0-4821-813d-a3f722f95293","Type":"ContainerStarted","Data":"a01561f12cdcf438c7f794dfd296ed2fe7b8c1785f7c40aa445f32955fb43842"} Nov 25 08:22:30 crc kubenswrapper[4760]: I1125 08:22:30.250093 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:30 crc kubenswrapper[4760]: I1125 08:22:30.251218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" event={"ID":"9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b","Type":"ContainerStarted","Data":"a3a25b3e07c709a566c3031cdbadc1717c0167cb7150ca55e8cc8fa159974bed"} Nov 25 08:22:30 crc kubenswrapper[4760]: I1125 08:22:30.271801 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ld6xj" podStartSLOduration=1.32065346 podStartE2EDuration="4.271704092s" podCreationTimestamp="2025-11-25 08:22:26 +0000 UTC" firstStartedPulling="2025-11-25 08:22:26.570808882 +0000 UTC m=+680.279839687" lastFinishedPulling="2025-11-25 08:22:29.521859524 +0000 UTC m=+683.230890319" observedRunningTime="2025-11-25 08:22:30.265602336 +0000 UTC m=+683.974633131" watchObservedRunningTime="2025-11-25 08:22:30.271704092 +0000 UTC m=+683.980734927" Nov 25 08:22:30 crc kubenswrapper[4760]: I1125 08:22:30.299272 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-cj4rl" podStartSLOduration=1.637336372 podStartE2EDuration="4.299241209s" podCreationTimestamp="2025-11-25 08:22:26 +0000 UTC" firstStartedPulling="2025-11-25 
08:22:26.833185053 +0000 UTC m=+680.542215838" lastFinishedPulling="2025-11-25 08:22:29.49508987 +0000 UTC m=+683.204120675" observedRunningTime="2025-11-25 08:22:30.292232956 +0000 UTC m=+684.001263781" watchObservedRunningTime="2025-11-25 08:22:30.299241209 +0000 UTC m=+684.008271994" Nov 25 08:22:30 crc kubenswrapper[4760]: I1125 08:22:30.320178 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" podStartSLOduration=2.067987214 podStartE2EDuration="4.320160214s" podCreationTimestamp="2025-11-25 08:22:26 +0000 UTC" firstStartedPulling="2025-11-25 08:22:27.269743796 +0000 UTC m=+680.978774591" lastFinishedPulling="2025-11-25 08:22:29.521916786 +0000 UTC m=+683.230947591" observedRunningTime="2025-11-25 08:22:30.312220145 +0000 UTC m=+684.021250940" watchObservedRunningTime="2025-11-25 08:22:30.320160214 +0000 UTC m=+684.029191009" Nov 25 08:22:32 crc kubenswrapper[4760]: I1125 08:22:32.260814 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr" event={"ID":"a7203aa8-a498-4242-9c79-3bcfb384707e","Type":"ContainerStarted","Data":"dd544f25ee8067f415d24437dd9f28c48841e632459d784ed4e42cccd4caac96"} Nov 25 08:22:32 crc kubenswrapper[4760]: I1125 08:22:32.277478 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-c27qr" podStartSLOduration=0.917479394 podStartE2EDuration="6.277463151s" podCreationTimestamp="2025-11-25 08:22:26 +0000 UTC" firstStartedPulling="2025-11-25 08:22:26.694610545 +0000 UTC m=+680.403641340" lastFinishedPulling="2025-11-25 08:22:32.054594302 +0000 UTC m=+685.763625097" observedRunningTime="2025-11-25 08:22:32.274930898 +0000 UTC m=+685.983961693" watchObservedRunningTime="2025-11-25 08:22:32.277463151 +0000 UTC m=+685.986493946" Nov 25 08:22:36 crc kubenswrapper[4760]: I1125 08:22:36.547619 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ld6xj" Nov 25 08:22:36 crc kubenswrapper[4760]: I1125 08:22:36.823227 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:36 crc kubenswrapper[4760]: I1125 08:22:36.823314 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:36 crc kubenswrapper[4760]: I1125 08:22:36.829142 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:37 crc kubenswrapper[4760]: I1125 08:22:37.291458 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f49d7b5fb-46nzv" Nov 25 08:22:37 crc kubenswrapper[4760]: I1125 08:22:37.342169 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-s4qrl"] Nov 25 08:22:47 crc kubenswrapper[4760]: I1125 08:22:47.072946 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p7b9n" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.520899 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w"] Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.522563 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.525605 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.532624 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w"] Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.665652 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.665945 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.666082 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g26tr\" (UniqueName: \"kubernetes.io/projected/2230ed24-958d-42e6-8c36-87e8b4cede69-kube-api-access-g26tr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: 
I1125 08:22:58.767929 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g26tr\" (UniqueName: \"kubernetes.io/projected/2230ed24-958d-42e6-8c36-87e8b4cede69-kube-api-access-g26tr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.768022 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.768071 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.768533 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.768594 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.787360 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g26tr\" (UniqueName: \"kubernetes.io/projected/2230ed24-958d-42e6-8c36-87e8b4cede69-kube-api-access-g26tr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:58 crc kubenswrapper[4760]: I1125 08:22:58.837490 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:22:59 crc kubenswrapper[4760]: I1125 08:22:59.280373 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w"] Nov 25 08:22:59 crc kubenswrapper[4760]: I1125 08:22:59.416069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" event={"ID":"2230ed24-958d-42e6-8c36-87e8b4cede69","Type":"ContainerStarted","Data":"65bb9259b5e65a5346b52656f9739506bee51c9b60bda538d92303edf0992a8f"} Nov 25 08:23:00 crc kubenswrapper[4760]: I1125 08:23:00.424034 4760 generic.go:334] "Generic (PLEG): container finished" podID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerID="859e4895e9ada91bea98c8303f499b5002c254a14e23de64e5d6487f34bddc45" exitCode=0 Nov 25 08:23:00 crc kubenswrapper[4760]: I1125 08:23:00.424364 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" event={"ID":"2230ed24-958d-42e6-8c36-87e8b4cede69","Type":"ContainerDied","Data":"859e4895e9ada91bea98c8303f499b5002c254a14e23de64e5d6487f34bddc45"} Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.380862 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-s4qrl" podUID="916b7590-b541-4ca9-b432-861731b7ae94" containerName="console" containerID="cri-o://4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a" gracePeriod=15 Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.438385 4760 generic.go:334] "Generic (PLEG): container finished" podID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerID="8891f1b49e58979161086aeb37abbdbd416d10b161671efa057e2838f45d28d1" exitCode=0 Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.438436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" event={"ID":"2230ed24-958d-42e6-8c36-87e8b4cede69","Type":"ContainerDied","Data":"8891f1b49e58979161086aeb37abbdbd416d10b161671efa057e2838f45d28d1"} Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.813140 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-s4qrl_916b7590-b541-4ca9-b432-861731b7ae94/console/0.log" Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.813392 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.924269 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-service-ca\") pod \"916b7590-b541-4ca9-b432-861731b7ae94\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.924348 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-trusted-ca-bundle\") pod \"916b7590-b541-4ca9-b432-861731b7ae94\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.924413 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-oauth-serving-cert\") pod \"916b7590-b541-4ca9-b432-861731b7ae94\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.924437 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwgh6\" (UniqueName: \"kubernetes.io/projected/916b7590-b541-4ca9-b432-861731b7ae94-kube-api-access-bwgh6\") pod \"916b7590-b541-4ca9-b432-861731b7ae94\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.924468 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-serving-cert\") pod \"916b7590-b541-4ca9-b432-861731b7ae94\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.924495 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-console-config\") pod \"916b7590-b541-4ca9-b432-861731b7ae94\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.924521 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-oauth-config\") pod \"916b7590-b541-4ca9-b432-861731b7ae94\" (UID: \"916b7590-b541-4ca9-b432-861731b7ae94\") " Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.925212 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "916b7590-b541-4ca9-b432-861731b7ae94" (UID: "916b7590-b541-4ca9-b432-861731b7ae94"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.925204 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-console-config" (OuterVolumeSpecName: "console-config") pod "916b7590-b541-4ca9-b432-861731b7ae94" (UID: "916b7590-b541-4ca9-b432-861731b7ae94"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.925417 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-service-ca" (OuterVolumeSpecName: "service-ca") pod "916b7590-b541-4ca9-b432-861731b7ae94" (UID: "916b7590-b541-4ca9-b432-861731b7ae94"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.925738 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "916b7590-b541-4ca9-b432-861731b7ae94" (UID: "916b7590-b541-4ca9-b432-861731b7ae94"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.929917 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "916b7590-b541-4ca9-b432-861731b7ae94" (UID: "916b7590-b541-4ca9-b432-861731b7ae94"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.929938 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916b7590-b541-4ca9-b432-861731b7ae94-kube-api-access-bwgh6" (OuterVolumeSpecName: "kube-api-access-bwgh6") pod "916b7590-b541-4ca9-b432-861731b7ae94" (UID: "916b7590-b541-4ca9-b432-861731b7ae94"). InnerVolumeSpecName "kube-api-access-bwgh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:23:02 crc kubenswrapper[4760]: I1125 08:23:02.930129 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "916b7590-b541-4ca9-b432-861731b7ae94" (UID: "916b7590-b541-4ca9-b432-861731b7ae94"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.025911 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.025943 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.025956 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwgh6\" (UniqueName: \"kubernetes.io/projected/916b7590-b541-4ca9-b432-861731b7ae94-kube-api-access-bwgh6\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.025969 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.025980 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.025992 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/916b7590-b541-4ca9-b432-861731b7ae94-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.026004 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/916b7590-b541-4ca9-b432-861731b7ae94-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:03 crc 
kubenswrapper[4760]: I1125 08:23:03.450433 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-s4qrl_916b7590-b541-4ca9-b432-861731b7ae94/console/0.log" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.450478 4760 generic.go:334] "Generic (PLEG): container finished" podID="916b7590-b541-4ca9-b432-861731b7ae94" containerID="4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a" exitCode=2 Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.450537 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-s4qrl" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.450566 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-s4qrl" event={"ID":"916b7590-b541-4ca9-b432-861731b7ae94","Type":"ContainerDied","Data":"4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a"} Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.450640 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-s4qrl" event={"ID":"916b7590-b541-4ca9-b432-861731b7ae94","Type":"ContainerDied","Data":"5a270b6a4a5cc04ff580798c3b7503db16c8ab4f2644fdf93145bdc89c25a1df"} Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.450674 4760 scope.go:117] "RemoveContainer" containerID="4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.454216 4760 generic.go:334] "Generic (PLEG): container finished" podID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerID="d81ce7abddea803c6a7b7c8e0b8f646e5b70eb4cc90e66508b1fd474f7c326e3" exitCode=0 Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.454356 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" 
event={"ID":"2230ed24-958d-42e6-8c36-87e8b4cede69","Type":"ContainerDied","Data":"d81ce7abddea803c6a7b7c8e0b8f646e5b70eb4cc90e66508b1fd474f7c326e3"} Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.475146 4760 scope.go:117] "RemoveContainer" containerID="4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a" Nov 25 08:23:03 crc kubenswrapper[4760]: E1125 08:23:03.476130 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a\": container with ID starting with 4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a not found: ID does not exist" containerID="4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.476177 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a"} err="failed to get container status \"4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a\": rpc error: code = NotFound desc = could not find container \"4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a\": container with ID starting with 4662f93551c12f2adf69a930128db7c04be35f83db6edb5c825c37a6d5542d5a not found: ID does not exist" Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.499603 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-s4qrl"] Nov 25 08:23:03 crc kubenswrapper[4760]: I1125 08:23:03.502687 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-s4qrl"] Nov 25 08:23:04 crc kubenswrapper[4760]: I1125 08:23:04.712927 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:23:04 crc kubenswrapper[4760]: I1125 08:23:04.848604 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g26tr\" (UniqueName: \"kubernetes.io/projected/2230ed24-958d-42e6-8c36-87e8b4cede69-kube-api-access-g26tr\") pod \"2230ed24-958d-42e6-8c36-87e8b4cede69\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " Nov 25 08:23:04 crc kubenswrapper[4760]: I1125 08:23:04.848692 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-bundle\") pod \"2230ed24-958d-42e6-8c36-87e8b4cede69\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " Nov 25 08:23:04 crc kubenswrapper[4760]: I1125 08:23:04.848832 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-util\") pod \"2230ed24-958d-42e6-8c36-87e8b4cede69\" (UID: \"2230ed24-958d-42e6-8c36-87e8b4cede69\") " Nov 25 08:23:04 crc kubenswrapper[4760]: I1125 08:23:04.849983 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-bundle" (OuterVolumeSpecName: "bundle") pod "2230ed24-958d-42e6-8c36-87e8b4cede69" (UID: "2230ed24-958d-42e6-8c36-87e8b4cede69"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:23:04 crc kubenswrapper[4760]: I1125 08:23:04.850277 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:04 crc kubenswrapper[4760]: I1125 08:23:04.853206 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2230ed24-958d-42e6-8c36-87e8b4cede69-kube-api-access-g26tr" (OuterVolumeSpecName: "kube-api-access-g26tr") pod "2230ed24-958d-42e6-8c36-87e8b4cede69" (UID: "2230ed24-958d-42e6-8c36-87e8b4cede69"). InnerVolumeSpecName "kube-api-access-g26tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:23:04 crc kubenswrapper[4760]: I1125 08:23:04.945633 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916b7590-b541-4ca9-b432-861731b7ae94" path="/var/lib/kubelet/pods/916b7590-b541-4ca9-b432-861731b7ae94/volumes" Nov 25 08:23:04 crc kubenswrapper[4760]: I1125 08:23:04.951754 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g26tr\" (UniqueName: \"kubernetes.io/projected/2230ed24-958d-42e6-8c36-87e8b4cede69-kube-api-access-g26tr\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:05 crc kubenswrapper[4760]: I1125 08:23:05.139384 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-util" (OuterVolumeSpecName: "util") pod "2230ed24-958d-42e6-8c36-87e8b4cede69" (UID: "2230ed24-958d-42e6-8c36-87e8b4cede69"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:23:05 crc kubenswrapper[4760]: I1125 08:23:05.154161 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2230ed24-958d-42e6-8c36-87e8b4cede69-util\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:05 crc kubenswrapper[4760]: I1125 08:23:05.467758 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" event={"ID":"2230ed24-958d-42e6-8c36-87e8b4cede69","Type":"ContainerDied","Data":"65bb9259b5e65a5346b52656f9739506bee51c9b60bda538d92303edf0992a8f"} Nov 25 08:23:05 crc kubenswrapper[4760]: I1125 08:23:05.467802 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65bb9259b5e65a5346b52656f9739506bee51c9b60bda538d92303edf0992a8f" Nov 25 08:23:05 crc kubenswrapper[4760]: I1125 08:23:05.467812 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.510229 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64"] Nov 25 08:23:13 crc kubenswrapper[4760]: E1125 08:23:13.510891 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerName="util" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.510902 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerName="util" Nov 25 08:23:13 crc kubenswrapper[4760]: E1125 08:23:13.510913 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916b7590-b541-4ca9-b432-861731b7ae94" containerName="console" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.510919 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="916b7590-b541-4ca9-b432-861731b7ae94" containerName="console" Nov 25 08:23:13 crc kubenswrapper[4760]: E1125 08:23:13.510933 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerName="pull" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.510940 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerName="pull" Nov 25 08:23:13 crc kubenswrapper[4760]: E1125 08:23:13.510954 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerName="extract" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.510960 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerName="extract" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.511051 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="916b7590-b541-4ca9-b432-861731b7ae94" containerName="console" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.511059 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2230ed24-958d-42e6-8c36-87e8b4cede69" containerName="extract" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.511464 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.514266 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.514373 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.515582 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.515628 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.517699 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tzfgb" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.533743 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64"] Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.704630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/394da4a0-f1c0-45c3-a31b-9cace1180c53-webhook-cert\") pod \"metallb-operator-controller-manager-76784bbdf-m7z64\" (UID: \"394da4a0-f1c0-45c3-a31b-9cace1180c53\") " pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.704714 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qdjg\" (UniqueName: \"kubernetes.io/projected/394da4a0-f1c0-45c3-a31b-9cace1180c53-kube-api-access-6qdjg\") pod 
\"metallb-operator-controller-manager-76784bbdf-m7z64\" (UID: \"394da4a0-f1c0-45c3-a31b-9cace1180c53\") " pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.704787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/394da4a0-f1c0-45c3-a31b-9cace1180c53-apiservice-cert\") pod \"metallb-operator-controller-manager-76784bbdf-m7z64\" (UID: \"394da4a0-f1c0-45c3-a31b-9cace1180c53\") " pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.754756 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-547776db9-454dl"] Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.755635 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.758395 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.758475 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.758857 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-n2mfk" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.777300 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-547776db9-454dl"] Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.805931 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/394da4a0-f1c0-45c3-a31b-9cace1180c53-webhook-cert\") pod \"metallb-operator-controller-manager-76784bbdf-m7z64\" (UID: \"394da4a0-f1c0-45c3-a31b-9cace1180c53\") " pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.805977 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qdjg\" (UniqueName: \"kubernetes.io/projected/394da4a0-f1c0-45c3-a31b-9cace1180c53-kube-api-access-6qdjg\") pod \"metallb-operator-controller-manager-76784bbdf-m7z64\" (UID: \"394da4a0-f1c0-45c3-a31b-9cace1180c53\") " pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.806382 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/394da4a0-f1c0-45c3-a31b-9cace1180c53-apiservice-cert\") pod \"metallb-operator-controller-manager-76784bbdf-m7z64\" (UID: \"394da4a0-f1c0-45c3-a31b-9cace1180c53\") " pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.813010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/394da4a0-f1c0-45c3-a31b-9cace1180c53-apiservice-cert\") pod \"metallb-operator-controller-manager-76784bbdf-m7z64\" (UID: \"394da4a0-f1c0-45c3-a31b-9cace1180c53\") " pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.813678 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/394da4a0-f1c0-45c3-a31b-9cace1180c53-webhook-cert\") pod \"metallb-operator-controller-manager-76784bbdf-m7z64\" (UID: \"394da4a0-f1c0-45c3-a31b-9cace1180c53\") " 
pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.829240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qdjg\" (UniqueName: \"kubernetes.io/projected/394da4a0-f1c0-45c3-a31b-9cace1180c53-kube-api-access-6qdjg\") pod \"metallb-operator-controller-manager-76784bbdf-m7z64\" (UID: \"394da4a0-f1c0-45c3-a31b-9cace1180c53\") " pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.830811 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.907654 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f1ca361-a3c2-45c2-86ef-a32c06fe6476-webhook-cert\") pod \"metallb-operator-webhook-server-547776db9-454dl\" (UID: \"0f1ca361-a3c2-45c2-86ef-a32c06fe6476\") " pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.907714 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t9ls\" (UniqueName: \"kubernetes.io/projected/0f1ca361-a3c2-45c2-86ef-a32c06fe6476-kube-api-access-4t9ls\") pod \"metallb-operator-webhook-server-547776db9-454dl\" (UID: \"0f1ca361-a3c2-45c2-86ef-a32c06fe6476\") " pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:13 crc kubenswrapper[4760]: I1125 08:23:13.907741 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f1ca361-a3c2-45c2-86ef-a32c06fe6476-apiservice-cert\") pod \"metallb-operator-webhook-server-547776db9-454dl\" (UID: 
\"0f1ca361-a3c2-45c2-86ef-a32c06fe6476\") " pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.008886 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t9ls\" (UniqueName: \"kubernetes.io/projected/0f1ca361-a3c2-45c2-86ef-a32c06fe6476-kube-api-access-4t9ls\") pod \"metallb-operator-webhook-server-547776db9-454dl\" (UID: \"0f1ca361-a3c2-45c2-86ef-a32c06fe6476\") " pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.009295 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f1ca361-a3c2-45c2-86ef-a32c06fe6476-apiservice-cert\") pod \"metallb-operator-webhook-server-547776db9-454dl\" (UID: \"0f1ca361-a3c2-45c2-86ef-a32c06fe6476\") " pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.009384 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f1ca361-a3c2-45c2-86ef-a32c06fe6476-webhook-cert\") pod \"metallb-operator-webhook-server-547776db9-454dl\" (UID: \"0f1ca361-a3c2-45c2-86ef-a32c06fe6476\") " pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.016080 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f1ca361-a3c2-45c2-86ef-a32c06fe6476-apiservice-cert\") pod \"metallb-operator-webhook-server-547776db9-454dl\" (UID: \"0f1ca361-a3c2-45c2-86ef-a32c06fe6476\") " pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.017027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/0f1ca361-a3c2-45c2-86ef-a32c06fe6476-webhook-cert\") pod \"metallb-operator-webhook-server-547776db9-454dl\" (UID: \"0f1ca361-a3c2-45c2-86ef-a32c06fe6476\") " pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.030349 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t9ls\" (UniqueName: \"kubernetes.io/projected/0f1ca361-a3c2-45c2-86ef-a32c06fe6476-kube-api-access-4t9ls\") pod \"metallb-operator-webhook-server-547776db9-454dl\" (UID: \"0f1ca361-a3c2-45c2-86ef-a32c06fe6476\") " pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.070151 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.302418 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64"] Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.372572 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-547776db9-454dl"] Nov 25 08:23:14 crc kubenswrapper[4760]: W1125 08:23:14.379704 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f1ca361_a3c2_45c2_86ef_a32c06fe6476.slice/crio-f9b20eb5991503983b645a6042c0e876df416795a68bbe68527599f32c1bcddb WatchSource:0}: Error finding container f9b20eb5991503983b645a6042c0e876df416795a68bbe68527599f32c1bcddb: Status 404 returned error can't find the container with id f9b20eb5991503983b645a6042c0e876df416795a68bbe68527599f32c1bcddb Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.523432 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" event={"ID":"394da4a0-f1c0-45c3-a31b-9cace1180c53","Type":"ContainerStarted","Data":"568a06cb985f49f30b1b3929e979b2aad0799e15ff3a18b06bc30b098df80dc6"} Nov 25 08:23:14 crc kubenswrapper[4760]: I1125 08:23:14.524442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" event={"ID":"0f1ca361-a3c2-45c2-86ef-a32c06fe6476","Type":"ContainerStarted","Data":"f9b20eb5991503983b645a6042c0e876df416795a68bbe68527599f32c1bcddb"} Nov 25 08:23:20 crc kubenswrapper[4760]: I1125 08:23:20.567608 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" event={"ID":"394da4a0-f1c0-45c3-a31b-9cace1180c53","Type":"ContainerStarted","Data":"6cc6b60bae09c6fcf7bce286981c52b8bfa986c423e015710f5d573f8ae10db2"} Nov 25 08:23:20 crc kubenswrapper[4760]: I1125 08:23:20.568462 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:20 crc kubenswrapper[4760]: I1125 08:23:20.569380 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" event={"ID":"0f1ca361-a3c2-45c2-86ef-a32c06fe6476","Type":"ContainerStarted","Data":"e5c622f45ef9e20e6944e206d4f16c31d1f5fb484bb60f2b4eebff62518eef1a"} Nov 25 08:23:20 crc kubenswrapper[4760]: I1125 08:23:20.569557 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:20 crc kubenswrapper[4760]: I1125 08:23:20.585678 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" podStartSLOduration=2.41150782 podStartE2EDuration="7.585662008s" podCreationTimestamp="2025-11-25 08:23:13 +0000 UTC" 
firstStartedPulling="2025-11-25 08:23:14.30982207 +0000 UTC m=+728.018852865" lastFinishedPulling="2025-11-25 08:23:19.483976238 +0000 UTC m=+733.193007053" observedRunningTime="2025-11-25 08:23:20.585087821 +0000 UTC m=+734.294118616" watchObservedRunningTime="2025-11-25 08:23:20.585662008 +0000 UTC m=+734.294692803" Nov 25 08:23:20 crc kubenswrapper[4760]: I1125 08:23:20.607386 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" podStartSLOduration=2.487306493 podStartE2EDuration="7.607369716s" podCreationTimestamp="2025-11-25 08:23:13 +0000 UTC" firstStartedPulling="2025-11-25 08:23:14.382213064 +0000 UTC m=+728.091243859" lastFinishedPulling="2025-11-25 08:23:19.502276287 +0000 UTC m=+733.211307082" observedRunningTime="2025-11-25 08:23:20.603999938 +0000 UTC m=+734.313030733" watchObservedRunningTime="2025-11-25 08:23:20.607369716 +0000 UTC m=+734.316400511" Nov 25 08:23:34 crc kubenswrapper[4760]: I1125 08:23:34.076020 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-547776db9-454dl" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.058511 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-trtpm"] Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.059377 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" podUID="6081bf3c-671c-46d5-8fbf-df633064cbe7" containerName="controller-manager" containerID="cri-o://7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87" gracePeriod=30 Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.160175 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44"] Nov 25 08:23:44 crc kubenswrapper[4760]: 
I1125 08:23:44.160753 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" podUID="773a65eb-f881-42b1-a499-9dd15265f638" containerName="route-controller-manager" containerID="cri-o://be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08" gracePeriod=30 Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.522989 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.577783 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.694189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mfsm\" (UniqueName: \"kubernetes.io/projected/6081bf3c-671c-46d5-8fbf-df633064cbe7-kube-api-access-5mfsm\") pod \"6081bf3c-671c-46d5-8fbf-df633064cbe7\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.694347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-client-ca\") pod \"773a65eb-f881-42b1-a499-9dd15265f638\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.694401 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6081bf3c-671c-46d5-8fbf-df633064cbe7-serving-cert\") pod \"6081bf3c-671c-46d5-8fbf-df633064cbe7\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.694439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/773a65eb-f881-42b1-a499-9dd15265f638-serving-cert\") pod \"773a65eb-f881-42b1-a499-9dd15265f638\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.694493 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-config\") pod \"6081bf3c-671c-46d5-8fbf-df633064cbe7\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.694530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-client-ca\") pod \"6081bf3c-671c-46d5-8fbf-df633064cbe7\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.694611 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-config\") pod \"773a65eb-f881-42b1-a499-9dd15265f638\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.694644 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndw88\" (UniqueName: \"kubernetes.io/projected/773a65eb-f881-42b1-a499-9dd15265f638-kube-api-access-ndw88\") pod \"773a65eb-f881-42b1-a499-9dd15265f638\" (UID: \"773a65eb-f881-42b1-a499-9dd15265f638\") " Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.694680 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-proxy-ca-bundles\") pod \"6081bf3c-671c-46d5-8fbf-df633064cbe7\" (UID: \"6081bf3c-671c-46d5-8fbf-df633064cbe7\") " Nov 25 08:23:44 crc 
kubenswrapper[4760]: I1125 08:23:44.695204 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-client-ca" (OuterVolumeSpecName: "client-ca") pod "773a65eb-f881-42b1-a499-9dd15265f638" (UID: "773a65eb-f881-42b1-a499-9dd15265f638"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.695797 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-config" (OuterVolumeSpecName: "config") pod "773a65eb-f881-42b1-a499-9dd15265f638" (UID: "773a65eb-f881-42b1-a499-9dd15265f638"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.695839 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-config" (OuterVolumeSpecName: "config") pod "6081bf3c-671c-46d5-8fbf-df633064cbe7" (UID: "6081bf3c-671c-46d5-8fbf-df633064cbe7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.696083 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-client-ca" (OuterVolumeSpecName: "client-ca") pod "6081bf3c-671c-46d5-8fbf-df633064cbe7" (UID: "6081bf3c-671c-46d5-8fbf-df633064cbe7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.696496 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6081bf3c-671c-46d5-8fbf-df633064cbe7" (UID: "6081bf3c-671c-46d5-8fbf-df633064cbe7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.700609 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773a65eb-f881-42b1-a499-9dd15265f638-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "773a65eb-f881-42b1-a499-9dd15265f638" (UID: "773a65eb-f881-42b1-a499-9dd15265f638"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.700678 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6081bf3c-671c-46d5-8fbf-df633064cbe7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6081bf3c-671c-46d5-8fbf-df633064cbe7" (UID: "6081bf3c-671c-46d5-8fbf-df633064cbe7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.700685 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6081bf3c-671c-46d5-8fbf-df633064cbe7-kube-api-access-5mfsm" (OuterVolumeSpecName: "kube-api-access-5mfsm") pod "6081bf3c-671c-46d5-8fbf-df633064cbe7" (UID: "6081bf3c-671c-46d5-8fbf-df633064cbe7"). InnerVolumeSpecName "kube-api-access-5mfsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.701423 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773a65eb-f881-42b1-a499-9dd15265f638-kube-api-access-ndw88" (OuterVolumeSpecName: "kube-api-access-ndw88") pod "773a65eb-f881-42b1-a499-9dd15265f638" (UID: "773a65eb-f881-42b1-a499-9dd15265f638"). InnerVolumeSpecName "kube-api-access-ndw88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.706346 4760 generic.go:334] "Generic (PLEG): container finished" podID="6081bf3c-671c-46d5-8fbf-df633064cbe7" containerID="7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87" exitCode=0 Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.706404 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.706450 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" event={"ID":"6081bf3c-671c-46d5-8fbf-df633064cbe7","Type":"ContainerDied","Data":"7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87"} Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.706496 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-trtpm" event={"ID":"6081bf3c-671c-46d5-8fbf-df633064cbe7","Type":"ContainerDied","Data":"9e39f0769d491a8afb3f18c4fcd849ccee93161d6e625cbb71fe19ecab608a1d"} Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.706517 4760 scope.go:117] "RemoveContainer" containerID="7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.707742 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="773a65eb-f881-42b1-a499-9dd15265f638" containerID="be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08" exitCode=0 Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.707779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" event={"ID":"773a65eb-f881-42b1-a499-9dd15265f638","Type":"ContainerDied","Data":"be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08"} Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.707803 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" event={"ID":"773a65eb-f881-42b1-a499-9dd15265f638","Type":"ContainerDied","Data":"e0c7e8c18ec20fc5659c0b6062fdda9c19d945074d2ef0e0c2c6477921998cb7"} Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.707801 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.726204 4760 scope.go:117] "RemoveContainer" containerID="7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87" Nov 25 08:23:44 crc kubenswrapper[4760]: E1125 08:23:44.726673 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87\": container with ID starting with 7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87 not found: ID does not exist" containerID="7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.726712 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87"} err="failed to get container status 
\"7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87\": rpc error: code = NotFound desc = could not find container \"7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87\": container with ID starting with 7dd2d7c93b89fb7dd93197baf5a76f1841facbb1be8120f68fc5b47fdfa0cc87 not found: ID does not exist" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.726736 4760 scope.go:117] "RemoveContainer" containerID="be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.738049 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-trtpm"] Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.741554 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-trtpm"] Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.746319 4760 scope.go:117] "RemoveContainer" containerID="be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08" Nov 25 08:23:44 crc kubenswrapper[4760]: E1125 08:23:44.746871 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08\": container with ID starting with be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08 not found: ID does not exist" containerID="be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.746901 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08"} err="failed to get container status \"be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08\": rpc error: code = NotFound desc = could not find container \"be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08\": 
container with ID starting with be64782313e19f0d7be3fd44823dad62721c384ed07f2a3ceef124d5d2e01b08 not found: ID does not exist" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.753045 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44"] Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.756561 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tss44"] Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.795558 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.795595 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6081bf3c-671c-46d5-8fbf-df633064cbe7-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.795604 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/773a65eb-f881-42b1-a499-9dd15265f638-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.795617 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.795625 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.795633 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/773a65eb-f881-42b1-a499-9dd15265f638-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.795645 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndw88\" (UniqueName: \"kubernetes.io/projected/773a65eb-f881-42b1-a499-9dd15265f638-kube-api-access-ndw88\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.795654 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6081bf3c-671c-46d5-8fbf-df633064cbe7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.795665 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mfsm\" (UniqueName: \"kubernetes.io/projected/6081bf3c-671c-46d5-8fbf-df633064cbe7-kube-api-access-5mfsm\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.961853 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6081bf3c-671c-46d5-8fbf-df633064cbe7" path="/var/lib/kubelet/pods/6081bf3c-671c-46d5-8fbf-df633064cbe7/volumes" Nov 25 08:23:44 crc kubenswrapper[4760]: I1125 08:23:44.962535 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773a65eb-f881-42b1-a499-9dd15265f638" path="/var/lib/kubelet/pods/773a65eb-f881-42b1-a499-9dd15265f638/volumes" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.470211 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp"] Nov 25 08:23:45 crc kubenswrapper[4760]: E1125 08:23:45.470467 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6081bf3c-671c-46d5-8fbf-df633064cbe7" containerName="controller-manager" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.470481 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6081bf3c-671c-46d5-8fbf-df633064cbe7" containerName="controller-manager" Nov 25 08:23:45 crc kubenswrapper[4760]: E1125 08:23:45.470496 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773a65eb-f881-42b1-a499-9dd15265f638" containerName="route-controller-manager" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.470503 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="773a65eb-f881-42b1-a499-9dd15265f638" containerName="route-controller-manager" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.470604 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6081bf3c-671c-46d5-8fbf-df633064cbe7" containerName="controller-manager" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.470620 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="773a65eb-f881-42b1-a499-9dd15265f638" containerName="route-controller-manager" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.471015 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:45 crc kubenswrapper[4760]: W1125 08:23:45.474322 4760 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 25 08:23:45 crc kubenswrapper[4760]: E1125 08:23:45.474361 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.474452 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cc48594b7-8xgvp"] Nov 25 08:23:45 crc kubenswrapper[4760]: W1125 08:23:45.474621 4760 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 25 08:23:45 crc kubenswrapper[4760]: E1125 08:23:45.474663 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource 
\"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 08:23:45 crc kubenswrapper[4760]: W1125 08:23:45.474738 4760 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 25 08:23:45 crc kubenswrapper[4760]: W1125 08:23:45.474742 4760 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 25 08:23:45 crc kubenswrapper[4760]: W1125 08:23:45.474757 4760 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 25 08:23:45 crc kubenswrapper[4760]: E1125 08:23:45.474789 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 08:23:45 crc kubenswrapper[4760]: E1125 08:23:45.474753 4760 reflector.go:158] "Unhandled 
Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 08:23:45 crc kubenswrapper[4760]: E1125 08:23:45.474802 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 08:23:45 crc kubenswrapper[4760]: W1125 08:23:45.474748 4760 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 25 08:23:45 crc kubenswrapper[4760]: E1125 08:23:45.474833 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.475229 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.479146 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.479926 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.480117 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.480293 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.480514 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.482807 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.489898 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.495347 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cc48594b7-8xgvp"] Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.503950 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcmsc\" (UniqueName: \"kubernetes.io/projected/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-kube-api-access-mcmsc\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " 
pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.504047 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-config\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.504106 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397596a0-6c91-47d6-8687-6cd69a473abe-client-ca\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.504125 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-serving-cert\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.504159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-client-ca\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.504192 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgcf8\" (UniqueName: \"kubernetes.io/projected/397596a0-6c91-47d6-8687-6cd69a473abe-kube-api-access-hgcf8\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.504309 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp"] Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.504321 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397596a0-6c91-47d6-8687-6cd69a473abe-proxy-ca-bundles\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.504379 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397596a0-6c91-47d6-8687-6cd69a473abe-serving-cert\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.504404 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397596a0-6c91-47d6-8687-6cd69a473abe-config\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.605125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397596a0-6c91-47d6-8687-6cd69a473abe-proxy-ca-bundles\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.605193 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397596a0-6c91-47d6-8687-6cd69a473abe-serving-cert\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.605217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397596a0-6c91-47d6-8687-6cd69a473abe-config\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.605240 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcmsc\" (UniqueName: \"kubernetes.io/projected/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-kube-api-access-mcmsc\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.605296 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-config\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 
08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.605324 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397596a0-6c91-47d6-8687-6cd69a473abe-client-ca\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.605349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-serving-cert\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.605376 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-client-ca\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.605404 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgcf8\" (UniqueName: \"kubernetes.io/projected/397596a0-6c91-47d6-8687-6cd69a473abe-kube-api-access-hgcf8\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.606611 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/397596a0-6c91-47d6-8687-6cd69a473abe-config\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: 
\"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.606672 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/397596a0-6c91-47d6-8687-6cd69a473abe-proxy-ca-bundles\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.607117 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/397596a0-6c91-47d6-8687-6cd69a473abe-client-ca\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.613030 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/397596a0-6c91-47d6-8687-6cd69a473abe-serving-cert\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.623723 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgcf8\" (UniqueName: \"kubernetes.io/projected/397596a0-6c91-47d6-8687-6cd69a473abe-kube-api-access-hgcf8\") pod \"controller-manager-7cc48594b7-8xgvp\" (UID: \"397596a0-6c91-47d6-8687-6cd69a473abe\") " pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:45 crc kubenswrapper[4760]: I1125 08:23:45.803324 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.215643 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cc48594b7-8xgvp"] Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.380407 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.387463 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-config\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.395645 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 08:23:46 crc kubenswrapper[4760]: E1125 08:23:46.606507 4760 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 25 08:23:46 crc kubenswrapper[4760]: E1125 08:23:46.606569 4760 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Nov 25 08:23:46 crc kubenswrapper[4760]: E1125 08:23:46.606596 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-serving-cert podName:db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53 nodeName:}" failed. No retries permitted until 2025-11-25 08:23:47.106574671 +0000 UTC m=+760.815605466 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-serving-cert") pod "route-controller-manager-6c7dd549d5-6dmlp" (UID: "db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53") : failed to sync secret cache: timed out waiting for the condition Nov 25 08:23:46 crc kubenswrapper[4760]: E1125 08:23:46.606644 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-client-ca podName:db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53 nodeName:}" failed. No retries permitted until 2025-11-25 08:23:47.106622873 +0000 UTC m=+760.815653668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-client-ca") pod "route-controller-manager-6c7dd549d5-6dmlp" (UID: "db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53") : failed to sync configmap cache: timed out waiting for the condition Nov 25 08:23:46 crc kubenswrapper[4760]: E1125 08:23:46.619362 4760 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 25 08:23:46 crc kubenswrapper[4760]: E1125 08:23:46.619419 4760 projected.go:194] Error preparing data for projected volume kube-api-access-mcmsc for pod openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp: failed to sync configmap cache: timed out waiting for the condition Nov 25 08:23:46 crc kubenswrapper[4760]: E1125 08:23:46.619484 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-kube-api-access-mcmsc podName:db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53 nodeName:}" failed. No retries permitted until 2025-11-25 08:23:47.119462234 +0000 UTC m=+760.828493029 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mcmsc" (UniqueName: "kubernetes.io/projected/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-kube-api-access-mcmsc") pod "route-controller-manager-6c7dd549d5-6dmlp" (UID: "db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53") : failed to sync configmap cache: timed out waiting for the condition Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.673012 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.723491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" event={"ID":"397596a0-6c91-47d6-8687-6cd69a473abe","Type":"ContainerStarted","Data":"414b627d6c3549998985513f234cdb9fa5c9ea4566a45d7a28bccea7132d052a"} Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.723536 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" event={"ID":"397596a0-6c91-47d6-8687-6cd69a473abe","Type":"ContainerStarted","Data":"66b1d3261be245b9a5faa3a2c2485605e5bcb4371890bd6938087e739a458cdd"} Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.723733 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.727849 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.764125 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cc48594b7-8xgvp" podStartSLOduration=2.764106767 podStartE2EDuration="2.764106767s" podCreationTimestamp="2025-11-25 08:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:23:46.744403477 +0000 UTC m=+760.453434262" watchObservedRunningTime="2025-11-25 08:23:46.764106767 +0000 UTC m=+760.473137562" Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.910504 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 08:23:46 crc kubenswrapper[4760]: I1125 08:23:46.968577 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 08:23:47 crc kubenswrapper[4760]: I1125 08:23:47.067154 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 08:23:47 crc kubenswrapper[4760]: I1125 08:23:47.123484 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcmsc\" (UniqueName: \"kubernetes.io/projected/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-kube-api-access-mcmsc\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:47 crc kubenswrapper[4760]: I1125 08:23:47.123856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-serving-cert\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:47 crc kubenswrapper[4760]: I1125 08:23:47.123895 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-client-ca\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: 
\"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:47 crc kubenswrapper[4760]: I1125 08:23:47.124632 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-client-ca\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:47 crc kubenswrapper[4760]: I1125 08:23:47.129213 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcmsc\" (UniqueName: \"kubernetes.io/projected/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-kube-api-access-mcmsc\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:47 crc kubenswrapper[4760]: I1125 08:23:47.129856 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53-serving-cert\") pod \"route-controller-manager-6c7dd549d5-6dmlp\" (UID: \"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\") " pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:47 crc kubenswrapper[4760]: I1125 08:23:47.290760 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:47 crc kubenswrapper[4760]: I1125 08:23:47.727813 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp"] Nov 25 08:23:47 crc kubenswrapper[4760]: W1125 08:23:47.730023 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb0f8a2c_ba6f_449d_a264_cc0c7e0c5e53.slice/crio-ca1470416d90d1acb3258ab53fd4f2b65503b58db1dd1cb687420058f6ef332e WatchSource:0}: Error finding container ca1470416d90d1acb3258ab53fd4f2b65503b58db1dd1cb687420058f6ef332e: Status 404 returned error can't find the container with id ca1470416d90d1acb3258ab53fd4f2b65503b58db1dd1cb687420058f6ef332e Nov 25 08:23:48 crc kubenswrapper[4760]: I1125 08:23:48.737546 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" event={"ID":"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53","Type":"ContainerStarted","Data":"c9abe3f7de9820d7f3fc93a3fbff2be685a6b52388e49c15d407e18dee1a3af7"} Nov 25 08:23:48 crc kubenswrapper[4760]: I1125 08:23:48.737621 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" event={"ID":"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53","Type":"ContainerStarted","Data":"ca1470416d90d1acb3258ab53fd4f2b65503b58db1dd1cb687420058f6ef332e"} Nov 25 08:23:48 crc kubenswrapper[4760]: I1125 08:23:48.760118 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" podStartSLOduration=4.760083231 podStartE2EDuration="4.760083231s" podCreationTimestamp="2025-11-25 08:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 08:23:48.755848069 +0000 UTC m=+762.464878864" watchObservedRunningTime="2025-11-25 08:23:48.760083231 +0000 UTC m=+762.469114026" Nov 25 08:23:49 crc kubenswrapper[4760]: I1125 08:23:49.742050 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:49 crc kubenswrapper[4760]: I1125 08:23:49.747622 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c7dd549d5-6dmlp" Nov 25 08:23:51 crc kubenswrapper[4760]: I1125 08:23:51.943730 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 08:23:53 crc kubenswrapper[4760]: I1125 08:23:53.833751 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.527853 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pw649"] Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.530023 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.532093 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5dbjm" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.532331 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.532442 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.535689 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-fzx95"] Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.536711 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.539342 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.551633 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-fzx95"] Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.628853 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-m2nhl"] Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.630004 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-m2nhl" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.633912 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.634537 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.634562 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-78cf4" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.634739 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.645486 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-reloader\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.645549 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-frr-sockets\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.645586 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6deb0467-1ded-4513-8aad-5a7b6c671895-metrics-certs\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.645612 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5p4\" (UniqueName: \"kubernetes.io/projected/6deb0467-1ded-4513-8aad-5a7b6c671895-kube-api-access-wh5p4\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.645878 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-metrics\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.646119 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-frr-conf\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.646187 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3531211f-bf66-45cb-9c5f-4a7aca2efbad-cert\") pod \"frr-k8s-webhook-server-6998585d5-fzx95\" (UID: \"3531211f-bf66-45cb-9c5f-4a7aca2efbad\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.646242 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7kdn\" (UniqueName: \"kubernetes.io/projected/3531211f-bf66-45cb-9c5f-4a7aca2efbad-kube-api-access-z7kdn\") pod \"frr-k8s-webhook-server-6998585d5-fzx95\" (UID: \"3531211f-bf66-45cb-9c5f-4a7aca2efbad\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.646417 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6deb0467-1ded-4513-8aad-5a7b6c671895-frr-startup\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.654558 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-wdjm7"] Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.656456 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-wdjm7" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.658673 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.679444 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-wdjm7"] Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.747897 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbvv\" (UniqueName: \"kubernetes.io/projected/e911dae6-d9ed-40d3-802a-e536e5258829-kube-api-access-bfbvv\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.747955 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/44dac91a-5352-4392-ab9b-49c59e38409f-metallb-excludel2\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748001 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25tf2\" (UniqueName: 
\"kubernetes.io/projected/44dac91a-5352-4392-ab9b-49c59e38409f-kube-api-access-25tf2\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748041 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6deb0467-1ded-4513-8aad-5a7b6c671895-frr-startup\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748175 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e911dae6-d9ed-40d3-802a-e536e5258829-metrics-certs\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748300 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-memberlist\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-reloader\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748497 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-frr-sockets\") pod \"frr-k8s-pw649\" (UID: 
\"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6deb0467-1ded-4513-8aad-5a7b6c671895-metrics-certs\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748586 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh5p4\" (UniqueName: \"kubernetes.io/projected/6deb0467-1ded-4513-8aad-5a7b6c671895-kube-api-access-wh5p4\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748623 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-metrics\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e911dae6-d9ed-40d3-802a-e536e5258829-cert\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7" Nov 25 08:23:54 crc kubenswrapper[4760]: E1125 08:23:54.748735 4760 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-frr-conf\") pod 
\"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: E1125 08:23:54.748814 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6deb0467-1ded-4513-8aad-5a7b6c671895-metrics-certs podName:6deb0467-1ded-4513-8aad-5a7b6c671895 nodeName:}" failed. No retries permitted until 2025-11-25 08:23:55.248793415 +0000 UTC m=+768.957824210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6deb0467-1ded-4513-8aad-5a7b6c671895-metrics-certs") pod "frr-k8s-pw649" (UID: "6deb0467-1ded-4513-8aad-5a7b6c671895") : secret "frr-k8s-certs-secret" not found Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3531211f-bf66-45cb-9c5f-4a7aca2efbad-cert\") pod \"frr-k8s-webhook-server-6998585d5-fzx95\" (UID: \"3531211f-bf66-45cb-9c5f-4a7aca2efbad\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748932 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-metrics-certs\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748952 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-reloader\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.748974 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-z7kdn\" (UniqueName: \"kubernetes.io/projected/3531211f-bf66-45cb-9c5f-4a7aca2efbad-kube-api-access-z7kdn\") pod \"frr-k8s-webhook-server-6998585d5-fzx95\" (UID: \"3531211f-bf66-45cb-9c5f-4a7aca2efbad\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.749030 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-frr-sockets\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.749115 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-frr-conf\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.749221 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6deb0467-1ded-4513-8aad-5a7b6c671895-metrics\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.749603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6deb0467-1ded-4513-8aad-5a7b6c671895-frr-startup\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.758710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3531211f-bf66-45cb-9c5f-4a7aca2efbad-cert\") pod \"frr-k8s-webhook-server-6998585d5-fzx95\" (UID: \"3531211f-bf66-45cb-9c5f-4a7aca2efbad\") " 
pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.774917 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh5p4\" (UniqueName: \"kubernetes.io/projected/6deb0467-1ded-4513-8aad-5a7b6c671895-kube-api-access-wh5p4\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.808897 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7kdn\" (UniqueName: \"kubernetes.io/projected/3531211f-bf66-45cb-9c5f-4a7aca2efbad-kube-api-access-z7kdn\") pod \"frr-k8s-webhook-server-6998585d5-fzx95\" (UID: \"3531211f-bf66-45cb-9c5f-4a7aca2efbad\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.850014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e911dae6-d9ed-40d3-802a-e536e5258829-cert\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.850090 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-metrics-certs\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.850125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbvv\" (UniqueName: \"kubernetes.io/projected/e911dae6-d9ed-40d3-802a-e536e5258829-kube-api-access-bfbvv\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7" Nov 25 08:23:54 
crc kubenswrapper[4760]: I1125 08:23:54.850151 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/44dac91a-5352-4392-ab9b-49c59e38409f-metallb-excludel2\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.850196 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25tf2\" (UniqueName: \"kubernetes.io/projected/44dac91a-5352-4392-ab9b-49c59e38409f-kube-api-access-25tf2\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.850229 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e911dae6-d9ed-40d3-802a-e536e5258829-metrics-certs\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7" Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.850268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-memberlist\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl" Nov 25 08:23:54 crc kubenswrapper[4760]: E1125 08:23:54.850443 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 08:23:54 crc kubenswrapper[4760]: E1125 08:23:54.850506 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-memberlist podName:44dac91a-5352-4392-ab9b-49c59e38409f nodeName:}" failed. 
No retries permitted until 2025-11-25 08:23:55.350487106 +0000 UTC m=+769.059517901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-memberlist") pod "speaker-m2nhl" (UID: "44dac91a-5352-4392-ab9b-49c59e38409f") : secret "metallb-memberlist" not found
Nov 25 08:23:54 crc kubenswrapper[4760]: E1125 08:23:54.851169 4760 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Nov 25 08:23:54 crc kubenswrapper[4760]: E1125 08:23:54.851226 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e911dae6-d9ed-40d3-802a-e536e5258829-metrics-certs podName:e911dae6-d9ed-40d3-802a-e536e5258829 nodeName:}" failed. No retries permitted until 2025-11-25 08:23:55.351212937 +0000 UTC m=+769.060243732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e911dae6-d9ed-40d3-802a-e536e5258829-metrics-certs") pod "controller-6c7b4b5f48-wdjm7" (UID: "e911dae6-d9ed-40d3-802a-e536e5258829") : secret "controller-certs-secret" not found
Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.851427 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/44dac91a-5352-4392-ab9b-49c59e38409f-metallb-excludel2\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl"
Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.852826 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e911dae6-d9ed-40d3-802a-e536e5258829-cert\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7"
Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.855663 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-metrics-certs\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl"
Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.864239 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95"
Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.877058 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25tf2\" (UniqueName: \"kubernetes.io/projected/44dac91a-5352-4392-ab9b-49c59e38409f-kube-api-access-25tf2\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl"
Nov 25 08:23:54 crc kubenswrapper[4760]: I1125 08:23:54.881617 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbvv\" (UniqueName: \"kubernetes.io/projected/e911dae6-d9ed-40d3-802a-e536e5258829-kube-api-access-bfbvv\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7"
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.259843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6deb0467-1ded-4513-8aad-5a7b6c671895-metrics-certs\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649"
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.266238 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-fzx95"]
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.266715 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6deb0467-1ded-4513-8aad-5a7b6c671895-metrics-certs\") pod \"frr-k8s-pw649\" (UID: \"6deb0467-1ded-4513-8aad-5a7b6c671895\") " pod="metallb-system/frr-k8s-pw649"
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.361579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e911dae6-d9ed-40d3-802a-e536e5258829-metrics-certs\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7"
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.361635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-memberlist\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl"
Nov 25 08:23:55 crc kubenswrapper[4760]: E1125 08:23:55.361781 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 25 08:23:55 crc kubenswrapper[4760]: E1125 08:23:55.361846 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-memberlist podName:44dac91a-5352-4392-ab9b-49c59e38409f nodeName:}" failed. No retries permitted until 2025-11-25 08:23:56.361825394 +0000 UTC m=+770.070856189 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-memberlist") pod "speaker-m2nhl" (UID: "44dac91a-5352-4392-ab9b-49c59e38409f") : secret "metallb-memberlist" not found
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.366029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e911dae6-d9ed-40d3-802a-e536e5258829-metrics-certs\") pod \"controller-6c7b4b5f48-wdjm7\" (UID: \"e911dae6-d9ed-40d3-802a-e536e5258829\") " pod="metallb-system/controller-6c7b4b5f48-wdjm7"
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.449304 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pw649"
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.571517 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-wdjm7"
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.787368 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerStarted","Data":"1eb422afa5aeb2e1a9ff5ad1720cba01b2d67eb0c8dad73ec9eeabf0c5a2d6f7"}
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.788334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" event={"ID":"3531211f-bf66-45cb-9c5f-4a7aca2efbad","Type":"ContainerStarted","Data":"444309d96997a6d1406eba33a8fe357bc4dd241a18a86ed797c05bead180ae44"}
Nov 25 08:23:55 crc kubenswrapper[4760]: I1125 08:23:55.972600 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-wdjm7"]
Nov 25 08:23:55 crc kubenswrapper[4760]: W1125 08:23:55.980291 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode911dae6_d9ed_40d3_802a_e536e5258829.slice/crio-6f3432702661ac9e532f39339bf42118a406e2560e1a85af6c7fc0f62e5aa8bd WatchSource:0}: Error finding container 6f3432702661ac9e532f39339bf42118a406e2560e1a85af6c7fc0f62e5aa8bd: Status 404 returned error can't find the container with id 6f3432702661ac9e532f39339bf42118a406e2560e1a85af6c7fc0f62e5aa8bd
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.378642 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-memberlist\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl"
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.395510 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44dac91a-5352-4392-ab9b-49c59e38409f-memberlist\") pod \"speaker-m2nhl\" (UID: \"44dac91a-5352-4392-ab9b-49c59e38409f\") " pod="metallb-system/speaker-m2nhl"
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.446279 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-m2nhl"
Nov 25 08:23:56 crc kubenswrapper[4760]: W1125 08:23:56.466888 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44dac91a_5352_4392_ab9b_49c59e38409f.slice/crio-f0ad666663d43d8ed9f4f018032821000316722b1bd8c76c444727a664ca05f6 WatchSource:0}: Error finding container f0ad666663d43d8ed9f4f018032821000316722b1bd8c76c444727a664ca05f6: Status 404 returned error can't find the container with id f0ad666663d43d8ed9f4f018032821000316722b1bd8c76c444727a664ca05f6
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.808215 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m2nhl" event={"ID":"44dac91a-5352-4392-ab9b-49c59e38409f","Type":"ContainerStarted","Data":"f7caf435ff12d67072ff19955651a173ddbd98ecc3bf7c256d1df2399a40216d"}
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.808282 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m2nhl" event={"ID":"44dac91a-5352-4392-ab9b-49c59e38409f","Type":"ContainerStarted","Data":"f0ad666663d43d8ed9f4f018032821000316722b1bd8c76c444727a664ca05f6"}
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.810454 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-wdjm7" event={"ID":"e911dae6-d9ed-40d3-802a-e536e5258829","Type":"ContainerStarted","Data":"d37637afc526fffe9665af15e063e95d93322ea0af5e73fbfe639cf3b373b006"}
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.810496 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-wdjm7" event={"ID":"e911dae6-d9ed-40d3-802a-e536e5258829","Type":"ContainerStarted","Data":"72c30886c76d0cb26f8dde17f0ec54079bb4a82c6338314d91b56a73259a2ef9"}
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.810526 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-wdjm7" event={"ID":"e911dae6-d9ed-40d3-802a-e536e5258829","Type":"ContainerStarted","Data":"6f3432702661ac9e532f39339bf42118a406e2560e1a85af6c7fc0f62e5aa8bd"}
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.810597 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-wdjm7"
Nov 25 08:23:56 crc kubenswrapper[4760]: I1125 08:23:56.830423 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-wdjm7" podStartSLOduration=2.830408425 podStartE2EDuration="2.830408425s" podCreationTimestamp="2025-11-25 08:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:23:56.829781227 +0000 UTC m=+770.538812042" watchObservedRunningTime="2025-11-25 08:23:56.830408425 +0000 UTC m=+770.539439220"
Nov 25 08:23:57 crc kubenswrapper[4760]: I1125 08:23:57.819776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m2nhl" event={"ID":"44dac91a-5352-4392-ab9b-49c59e38409f","Type":"ContainerStarted","Data":"52318256b5ff431942e7fa8d1c84aa4b66a19b491bdec87ce36177182722eda4"}
Nov 25 08:23:57 crc kubenswrapper[4760]: I1125 08:23:57.820011 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-m2nhl"
Nov 25 08:23:57 crc kubenswrapper[4760]: I1125 08:23:57.838389 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-m2nhl" podStartSLOduration=3.838367196 podStartE2EDuration="3.838367196s" podCreationTimestamp="2025-11-25 08:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:23:57.838215421 +0000 UTC m=+771.547246246" watchObservedRunningTime="2025-11-25 08:23:57.838367196 +0000 UTC m=+771.547398001"
Nov 25 08:24:01 crc kubenswrapper[4760]: I1125 08:24:01.746017 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 08:24:01 crc kubenswrapper[4760]: I1125 08:24:01.746369 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 08:24:02 crc kubenswrapper[4760]: I1125 08:24:02.854354 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" event={"ID":"3531211f-bf66-45cb-9c5f-4a7aca2efbad","Type":"ContainerStarted","Data":"a7a93c0ea307b68e2a3a6ad02269df141855fc9067910badf2a7ee5220e247f2"}
Nov 25 08:24:02 crc kubenswrapper[4760]: I1125 08:24:02.854855 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95"
Nov 25 08:24:02 crc kubenswrapper[4760]: I1125 08:24:02.856288 4760 generic.go:334] "Generic (PLEG): container finished" podID="6deb0467-1ded-4513-8aad-5a7b6c671895" containerID="1177bfa7d4fc2461a5e445ad06a0ece5b73df44d60bd0ab31ab5062a483f1e5c" exitCode=0
Nov 25 08:24:02 crc kubenswrapper[4760]: I1125 08:24:02.856380 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerDied","Data":"1177bfa7d4fc2461a5e445ad06a0ece5b73df44d60bd0ab31ab5062a483f1e5c"}
Nov 25 08:24:02 crc kubenswrapper[4760]: I1125 08:24:02.873098 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95" podStartSLOduration=2.410496852 podStartE2EDuration="8.87307796s" podCreationTimestamp="2025-11-25 08:23:54 +0000 UTC" firstStartedPulling="2025-11-25 08:23:55.274597671 +0000 UTC m=+768.983628466" lastFinishedPulling="2025-11-25 08:24:01.737178779 +0000 UTC m=+775.446209574" observedRunningTime="2025-11-25 08:24:02.869001892 +0000 UTC m=+776.578032687" watchObservedRunningTime="2025-11-25 08:24:02.87307796 +0000 UTC m=+776.582108755"
Nov 25 08:24:03 crc kubenswrapper[4760]: I1125 08:24:03.865536 4760 generic.go:334] "Generic (PLEG): container finished" podID="6deb0467-1ded-4513-8aad-5a7b6c671895" containerID="6fece720ab8a98c2b9d9827bdf004d4177a7f9cbe83b69dd25bdea5002bea3c7" exitCode=0
Nov 25 08:24:03 crc kubenswrapper[4760]: I1125 08:24:03.865674 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerDied","Data":"6fece720ab8a98c2b9d9827bdf004d4177a7f9cbe83b69dd25bdea5002bea3c7"}
Nov 25 08:24:04 crc kubenswrapper[4760]: I1125 08:24:04.875074 4760 generic.go:334] "Generic (PLEG): container finished" podID="6deb0467-1ded-4513-8aad-5a7b6c671895" containerID="49a897aa5e2e4360e5daee8fe302fcf2ac07790282816fb43e6f2e54fa94f9ff" exitCode=0
Nov 25 08:24:04 crc kubenswrapper[4760]: I1125 08:24:04.875136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerDied","Data":"49a897aa5e2e4360e5daee8fe302fcf2ac07790282816fb43e6f2e54fa94f9ff"}
Nov 25 08:24:05 crc kubenswrapper[4760]: I1125 08:24:05.888623 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerStarted","Data":"c9852805f44e826e95944d3b1e4de8a1922904434114cb89a11b3a739a223185"}
Nov 25 08:24:05 crc kubenswrapper[4760]: I1125 08:24:05.888978 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerStarted","Data":"ece8e23fc733fcf954864119fee7e594a74e8f1f80249b82cec84fe01f4e52b2"}
Nov 25 08:24:05 crc kubenswrapper[4760]: I1125 08:24:05.889002 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pw649"
Nov 25 08:24:05 crc kubenswrapper[4760]: I1125 08:24:05.889018 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerStarted","Data":"c4ba97365997aacf5ae3200cb2387f88b52a3d74d0a04e3a63c999dfab2abc96"}
Nov 25 08:24:05 crc kubenswrapper[4760]: I1125 08:24:05.889031 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerStarted","Data":"8d3b302bce6af32824ffb8839c43eced88d3475536ac751b1b79efd79212446e"}
Nov 25 08:24:05 crc kubenswrapper[4760]: I1125 08:24:05.889042 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerStarted","Data":"702e5cfc872e170edafc167492c3576b33a329c83c9a93f4b580784280ac8cc5"}
Nov 25 08:24:05 crc kubenswrapper[4760]: I1125 08:24:05.889107 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pw649" event={"ID":"6deb0467-1ded-4513-8aad-5a7b6c671895","Type":"ContainerStarted","Data":"c2c47f7988bee3f3b90665d9ae7219ec07c463112486a12018637558844772bc"}
Nov 25 08:24:05 crc kubenswrapper[4760]: I1125 08:24:05.910562 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pw649" podStartSLOduration=5.785792486 podStartE2EDuration="11.910477502s" podCreationTimestamp="2025-11-25 08:23:54 +0000 UTC" firstStartedPulling="2025-11-25 08:23:55.589228111 +0000 UTC m=+769.298258906" lastFinishedPulling="2025-11-25 08:24:01.713913127 +0000 UTC m=+775.422943922" observedRunningTime="2025-11-25 08:24:05.907279619 +0000 UTC m=+779.616310424" watchObservedRunningTime="2025-11-25 08:24:05.910477502 +0000 UTC m=+779.619508307"
Nov 25 08:24:06 crc kubenswrapper[4760]: I1125 08:24:06.449711 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-m2nhl"
Nov 25 08:24:09 crc kubenswrapper[4760]: I1125 08:24:09.279811 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5xjw7"]
Nov 25 08:24:09 crc kubenswrapper[4760]: I1125 08:24:09.281218 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5xjw7"
Nov 25 08:24:09 crc kubenswrapper[4760]: I1125 08:24:09.283164 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Nov 25 08:24:09 crc kubenswrapper[4760]: I1125 08:24:09.284599 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Nov 25 08:24:09 crc kubenswrapper[4760]: I1125 08:24:09.299194 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5xjw7"]
Nov 25 08:24:09 crc kubenswrapper[4760]: I1125 08:24:09.365745 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9xh\" (UniqueName: \"kubernetes.io/projected/ac629990-7360-4846-b109-f01239b15bda-kube-api-access-qd9xh\") pod \"openstack-operator-index-5xjw7\" (UID: \"ac629990-7360-4846-b109-f01239b15bda\") " pod="openstack-operators/openstack-operator-index-5xjw7"
Nov 25 08:24:09 crc kubenswrapper[4760]: I1125 08:24:09.467218 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9xh\" (UniqueName: \"kubernetes.io/projected/ac629990-7360-4846-b109-f01239b15bda-kube-api-access-qd9xh\") pod \"openstack-operator-index-5xjw7\" (UID: \"ac629990-7360-4846-b109-f01239b15bda\") " pod="openstack-operators/openstack-operator-index-5xjw7"
Nov 25 08:24:09 crc kubenswrapper[4760]: I1125 08:24:09.490422 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd9xh\" (UniqueName: \"kubernetes.io/projected/ac629990-7360-4846-b109-f01239b15bda-kube-api-access-qd9xh\") pod \"openstack-operator-index-5xjw7\" (UID: \"ac629990-7360-4846-b109-f01239b15bda\") " pod="openstack-operators/openstack-operator-index-5xjw7"
Nov 25 08:24:09 crc kubenswrapper[4760]: I1125 08:24:09.607028 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5xjw7"
Nov 25 08:24:10 crc kubenswrapper[4760]: I1125 08:24:10.001616 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5xjw7"]
Nov 25 08:24:10 crc kubenswrapper[4760]: W1125 08:24:10.007397 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac629990_7360_4846_b109_f01239b15bda.slice/crio-bcf19d897ca3b6497c01d6ac6ad05b772e8de17865ed286bb5ffafa94b478df6 WatchSource:0}: Error finding container bcf19d897ca3b6497c01d6ac6ad05b772e8de17865ed286bb5ffafa94b478df6: Status 404 returned error can't find the container with id bcf19d897ca3b6497c01d6ac6ad05b772e8de17865ed286bb5ffafa94b478df6
Nov 25 08:24:10 crc kubenswrapper[4760]: I1125 08:24:10.450314 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pw649"
Nov 25 08:24:10 crc kubenswrapper[4760]: I1125 08:24:10.488163 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pw649"
Nov 25 08:24:10 crc kubenswrapper[4760]: I1125 08:24:10.918447 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xjw7" event={"ID":"ac629990-7360-4846-b109-f01239b15bda","Type":"ContainerStarted","Data":"bcf19d897ca3b6497c01d6ac6ad05b772e8de17865ed286bb5ffafa94b478df6"}
Nov 25 08:24:11 crc kubenswrapper[4760]: I1125 08:24:11.935671 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xjw7" event={"ID":"ac629990-7360-4846-b109-f01239b15bda","Type":"ContainerStarted","Data":"c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c"}
Nov 25 08:24:11 crc kubenswrapper[4760]: I1125 08:24:11.952008 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5xjw7" podStartSLOduration=1.9700130329999999 podStartE2EDuration="2.9519696s" podCreationTimestamp="2025-11-25 08:24:09 +0000 UTC" firstStartedPulling="2025-11-25 08:24:10.009648059 +0000 UTC m=+783.718678854" lastFinishedPulling="2025-11-25 08:24:10.991604626 +0000 UTC m=+784.700635421" observedRunningTime="2025-11-25 08:24:11.947590963 +0000 UTC m=+785.656621778" watchObservedRunningTime="2025-11-25 08:24:11.9519696 +0000 UTC m=+785.661000395"
Nov 25 08:24:12 crc kubenswrapper[4760]: I1125 08:24:12.667220 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5xjw7"]
Nov 25 08:24:13 crc kubenswrapper[4760]: I1125 08:24:13.271037 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-w94z5"]
Nov 25 08:24:13 crc kubenswrapper[4760]: I1125 08:24:13.271697 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w94z5"
Nov 25 08:24:13 crc kubenswrapper[4760]: I1125 08:24:13.275133 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cspft"
Nov 25 08:24:13 crc kubenswrapper[4760]: I1125 08:24:13.284871 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w94z5"]
Nov 25 08:24:13 crc kubenswrapper[4760]: I1125 08:24:13.315692 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q2hw\" (UniqueName: \"kubernetes.io/projected/7e50fb1c-ead6-4358-a11b-66963b307f3a-kube-api-access-7q2hw\") pod \"openstack-operator-index-w94z5\" (UID: \"7e50fb1c-ead6-4358-a11b-66963b307f3a\") " pod="openstack-operators/openstack-operator-index-w94z5"
Nov 25 08:24:13 crc kubenswrapper[4760]: I1125 08:24:13.417127 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q2hw\" (UniqueName: \"kubernetes.io/projected/7e50fb1c-ead6-4358-a11b-66963b307f3a-kube-api-access-7q2hw\") pod \"openstack-operator-index-w94z5\" (UID: \"7e50fb1c-ead6-4358-a11b-66963b307f3a\") " pod="openstack-operators/openstack-operator-index-w94z5"
Nov 25 08:24:13 crc kubenswrapper[4760]: I1125 08:24:13.437372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q2hw\" (UniqueName: \"kubernetes.io/projected/7e50fb1c-ead6-4358-a11b-66963b307f3a-kube-api-access-7q2hw\") pod \"openstack-operator-index-w94z5\" (UID: \"7e50fb1c-ead6-4358-a11b-66963b307f3a\") " pod="openstack-operators/openstack-operator-index-w94z5"
Nov 25 08:24:13 crc kubenswrapper[4760]: I1125 08:24:13.594558 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w94z5"
Nov 25 08:24:13 crc kubenswrapper[4760]: I1125 08:24:13.947411 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-5xjw7" podUID="ac629990-7360-4846-b109-f01239b15bda" containerName="registry-server" containerID="cri-o://c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c" gracePeriod=2
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.002989 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w94z5"]
Nov 25 08:24:14 crc kubenswrapper[4760]: W1125 08:24:14.052299 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e50fb1c_ead6_4358_a11b_66963b307f3a.slice/crio-0e60fa38bc0c6110392ca5f6815a7740ce4f55aa7c51cc9b7a305ccf00267946 WatchSource:0}: Error finding container 0e60fa38bc0c6110392ca5f6815a7740ce4f55aa7c51cc9b7a305ccf00267946: Status 404 returned error can't find the container with id 0e60fa38bc0c6110392ca5f6815a7740ce4f55aa7c51cc9b7a305ccf00267946
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.346735 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5xjw7"
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.429922 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd9xh\" (UniqueName: \"kubernetes.io/projected/ac629990-7360-4846-b109-f01239b15bda-kube-api-access-qd9xh\") pod \"ac629990-7360-4846-b109-f01239b15bda\" (UID: \"ac629990-7360-4846-b109-f01239b15bda\") "
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.435548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac629990-7360-4846-b109-f01239b15bda-kube-api-access-qd9xh" (OuterVolumeSpecName: "kube-api-access-qd9xh") pod "ac629990-7360-4846-b109-f01239b15bda" (UID: "ac629990-7360-4846-b109-f01239b15bda"). InnerVolumeSpecName "kube-api-access-qd9xh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.531920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd9xh\" (UniqueName: \"kubernetes.io/projected/ac629990-7360-4846-b109-f01239b15bda-kube-api-access-qd9xh\") on node \"crc\" DevicePath \"\""
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.870135 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-fzx95"
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.953967 4760 generic.go:334] "Generic (PLEG): container finished" podID="ac629990-7360-4846-b109-f01239b15bda" containerID="c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c" exitCode=0
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.954040 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xjw7" event={"ID":"ac629990-7360-4846-b109-f01239b15bda","Type":"ContainerDied","Data":"c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c"}
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.954067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xjw7" event={"ID":"ac629990-7360-4846-b109-f01239b15bda","Type":"ContainerDied","Data":"bcf19d897ca3b6497c01d6ac6ad05b772e8de17865ed286bb5ffafa94b478df6"}
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.954073 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5xjw7"
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.954084 4760 scope.go:117] "RemoveContainer" containerID="c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c"
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.955980 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w94z5" event={"ID":"7e50fb1c-ead6-4358-a11b-66963b307f3a","Type":"ContainerStarted","Data":"d112c3035ff903bd89925de3626f82b928cf16b980e160a66389b0b0765a3bf2"}
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.956028 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w94z5" event={"ID":"7e50fb1c-ead6-4358-a11b-66963b307f3a","Type":"ContainerStarted","Data":"0e60fa38bc0c6110392ca5f6815a7740ce4f55aa7c51cc9b7a305ccf00267946"}
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.967802 4760 scope.go:117] "RemoveContainer" containerID="c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c"
Nov 25 08:24:14 crc kubenswrapper[4760]: E1125 08:24:14.968300 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c\": container with ID starting with c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c not found: ID does not exist" containerID="c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c"
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.968343 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c"} err="failed to get container status \"c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c\": rpc error: code = NotFound desc = could not find container \"c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c\": container with ID starting with c0d62f9674eb714fa236ce25bb7354aa1a597e23732220727d392c8373cd3c7c not found: ID does not exist"
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.985065 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-w94z5" podStartSLOduration=1.53427676 podStartE2EDuration="1.985047345s" podCreationTimestamp="2025-11-25 08:24:13 +0000 UTC" firstStartedPulling="2025-11-25 08:24:14.056887754 +0000 UTC m=+787.765918549" lastFinishedPulling="2025-11-25 08:24:14.507658339 +0000 UTC m=+788.216689134" observedRunningTime="2025-11-25 08:24:14.974168641 +0000 UTC m=+788.683199426" watchObservedRunningTime="2025-11-25 08:24:14.985047345 +0000 UTC m=+788.694078140"
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.991460 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5xjw7"]
Nov 25 08:24:14 crc kubenswrapper[4760]: I1125 08:24:14.992706 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-5xjw7"]
Nov 25 08:24:15 crc kubenswrapper[4760]: I1125 08:24:15.455782 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pw649"
Nov 25 08:24:15 crc kubenswrapper[4760]: I1125 08:24:15.574980 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-wdjm7"
Nov 25 08:24:16 crc kubenswrapper[4760]: I1125 08:24:16.945821 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac629990-7360-4846-b109-f01239b15bda" path="/var/lib/kubelet/pods/ac629990-7360-4846-b109-f01239b15bda/volumes"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.479377 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6tjlx"]
Nov 25 08:24:17 crc kubenswrapper[4760]: E1125 08:24:17.479666 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac629990-7360-4846-b109-f01239b15bda" containerName="registry-server"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.479681 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac629990-7360-4846-b109-f01239b15bda" containerName="registry-server"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.479820 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac629990-7360-4846-b109-f01239b15bda" containerName="registry-server"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.480704 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.482327 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tjlx"]
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.576979 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-utilities\") pod \"community-operators-6tjlx\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.577114 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7b7c\" (UniqueName: \"kubernetes.io/projected/e1c63720-7aff-4a6a-8127-549ce45ba3e3-kube-api-access-p7b7c\") pod \"community-operators-6tjlx\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.577160 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-catalog-content\") pod \"community-operators-6tjlx\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.678388 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-utilities\") pod \"community-operators-6tjlx\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.678488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7b7c\" (UniqueName: \"kubernetes.io/projected/e1c63720-7aff-4a6a-8127-549ce45ba3e3-kube-api-access-p7b7c\") pod \"community-operators-6tjlx\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.678519 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-catalog-content\") pod \"community-operators-6tjlx\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.679041 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-utilities\") pod \"community-operators-6tjlx\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.679094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-catalog-content\") pod \"community-operators-6tjlx\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.696828 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7b7c\" (UniqueName: \"kubernetes.io/projected/e1c63720-7aff-4a6a-8127-549ce45ba3e3-kube-api-access-p7b7c\") pod \"community-operators-6tjlx\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:17 crc kubenswrapper[4760]: I1125 08:24:17.796821 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjlx"
Nov 25 08:24:18 crc kubenswrapper[4760]: I1125 08:24:18.245777 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tjlx"]
Nov 25 08:24:18 crc kubenswrapper[4760]: W1125 08:24:18.254363 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1c63720_7aff_4a6a_8127_549ce45ba3e3.slice/crio-2d72dd875ee693a450014cf199c7aa6d46f9e48c321fd31b54aca8dbd3378df7 WatchSource:0}: Error finding container 2d72dd875ee693a450014cf199c7aa6d46f9e48c321fd31b54aca8dbd3378df7: Status 404 returned error can't find the container with id 2d72dd875ee693a450014cf199c7aa6d46f9e48c321fd31b54aca8dbd3378df7
Nov 25 08:24:18 crc kubenswrapper[4760]: I1125 08:24:18.983095 4760 generic.go:334] "Generic (PLEG): container finished" podID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerID="4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520" exitCode=0
Nov 25 08:24:18 crc kubenswrapper[4760]: I1125 08:24:18.983146 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjlx" event={"ID":"e1c63720-7aff-4a6a-8127-549ce45ba3e3","Type":"ContainerDied","Data":"4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520"}
Nov 25 08:24:18 crc kubenswrapper[4760]: I1125 08:24:18.983183 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjlx" event={"ID":"e1c63720-7aff-4a6a-8127-549ce45ba3e3","Type":"ContainerStarted","Data":"2d72dd875ee693a450014cf199c7aa6d46f9e48c321fd31b54aca8dbd3378df7"}
Nov 25 08:24:19 crc kubenswrapper[4760]: I1125 08:24:19.873326 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ltxdn"]
Nov 25 08:24:19 crc kubenswrapper[4760]: I1125 08:24:19.875462 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:19 crc kubenswrapper[4760]: I1125 08:24:19.881778 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltxdn"] Nov 25 08:24:19 crc kubenswrapper[4760]: I1125 08:24:19.913906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twztj\" (UniqueName: \"kubernetes.io/projected/5107cf72-3888-4d18-8afa-9e16fa0427d1-kube-api-access-twztj\") pod \"redhat-operators-ltxdn\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:19 crc kubenswrapper[4760]: I1125 08:24:19.913966 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-utilities\") pod \"redhat-operators-ltxdn\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:19 crc kubenswrapper[4760]: I1125 08:24:19.913992 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-catalog-content\") pod \"redhat-operators-ltxdn\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:19 crc kubenswrapper[4760]: I1125 08:24:19.990166 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjlx" event={"ID":"e1c63720-7aff-4a6a-8127-549ce45ba3e3","Type":"ContainerStarted","Data":"7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c"} Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.014683 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twztj\" (UniqueName: 
\"kubernetes.io/projected/5107cf72-3888-4d18-8afa-9e16fa0427d1-kube-api-access-twztj\") pod \"redhat-operators-ltxdn\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.015038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-utilities\") pod \"redhat-operators-ltxdn\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.015157 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-catalog-content\") pod \"redhat-operators-ltxdn\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.015724 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-catalog-content\") pod \"redhat-operators-ltxdn\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.016075 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-utilities\") pod \"redhat-operators-ltxdn\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.034838 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twztj\" (UniqueName: 
\"kubernetes.io/projected/5107cf72-3888-4d18-8afa-9e16fa0427d1-kube-api-access-twztj\") pod \"redhat-operators-ltxdn\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.192682 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.607080 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltxdn"] Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.997764 4760 generic.go:334] "Generic (PLEG): container finished" podID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerID="7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c" exitCode=0 Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.997836 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjlx" event={"ID":"e1c63720-7aff-4a6a-8127-549ce45ba3e3","Type":"ContainerDied","Data":"7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c"} Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.999208 4760 generic.go:334] "Generic (PLEG): container finished" podID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerID="92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b" exitCode=0 Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.999257 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltxdn" event={"ID":"5107cf72-3888-4d18-8afa-9e16fa0427d1","Type":"ContainerDied","Data":"92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b"} Nov 25 08:24:20 crc kubenswrapper[4760]: I1125 08:24:20.999284 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltxdn" 
event={"ID":"5107cf72-3888-4d18-8afa-9e16fa0427d1","Type":"ContainerStarted","Data":"4b1452dce1bbb64fde6a69e054a6c7c3ff0441430eb654b87592e9803fdebceb"} Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.008029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjlx" event={"ID":"e1c63720-7aff-4a6a-8127-549ce45ba3e3","Type":"ContainerStarted","Data":"448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe"} Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.026403 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6tjlx" podStartSLOduration=2.38013195 podStartE2EDuration="5.026346099s" podCreationTimestamp="2025-11-25 08:24:17 +0000 UTC" firstStartedPulling="2025-11-25 08:24:18.984373275 +0000 UTC m=+792.693404070" lastFinishedPulling="2025-11-25 08:24:21.630587424 +0000 UTC m=+795.339618219" observedRunningTime="2025-11-25 08:24:22.025723022 +0000 UTC m=+795.734753817" watchObservedRunningTime="2025-11-25 08:24:22.026346099 +0000 UTC m=+795.735376895" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.269394 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z6q8q"] Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.270699 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.278914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6q8q"] Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.347902 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-catalog-content\") pod \"certified-operators-z6q8q\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.347958 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-utilities\") pod \"certified-operators-z6q8q\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.348019 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bdd\" (UniqueName: \"kubernetes.io/projected/5bc14e67-adcf-40d6-a429-2504db2317d2-kube-api-access-86bdd\") pod \"certified-operators-z6q8q\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.449777 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-catalog-content\") pod \"certified-operators-z6q8q\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.449846 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-utilities\") pod \"certified-operators-z6q8q\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.449886 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bdd\" (UniqueName: \"kubernetes.io/projected/5bc14e67-adcf-40d6-a429-2504db2317d2-kube-api-access-86bdd\") pod \"certified-operators-z6q8q\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.450297 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-catalog-content\") pod \"certified-operators-z6q8q\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.450406 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-utilities\") pod \"certified-operators-z6q8q\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.475964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bdd\" (UniqueName: \"kubernetes.io/projected/5bc14e67-adcf-40d6-a429-2504db2317d2-kube-api-access-86bdd\") pod \"certified-operators-z6q8q\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:22 crc kubenswrapper[4760]: I1125 08:24:22.584445 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:23 crc kubenswrapper[4760]: I1125 08:24:23.017851 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltxdn" event={"ID":"5107cf72-3888-4d18-8afa-9e16fa0427d1","Type":"ContainerStarted","Data":"cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31"} Nov 25 08:24:23 crc kubenswrapper[4760]: I1125 08:24:23.289556 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6q8q"] Nov 25 08:24:23 crc kubenswrapper[4760]: I1125 08:24:23.595287 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-w94z5" Nov 25 08:24:23 crc kubenswrapper[4760]: I1125 08:24:23.596239 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-w94z5" Nov 25 08:24:23 crc kubenswrapper[4760]: I1125 08:24:23.652094 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-w94z5" Nov 25 08:24:24 crc kubenswrapper[4760]: I1125 08:24:24.027043 4760 generic.go:334] "Generic (PLEG): container finished" podID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerID="cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31" exitCode=0 Nov 25 08:24:24 crc kubenswrapper[4760]: I1125 08:24:24.027118 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltxdn" event={"ID":"5107cf72-3888-4d18-8afa-9e16fa0427d1","Type":"ContainerDied","Data":"cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31"} Nov 25 08:24:24 crc kubenswrapper[4760]: I1125 08:24:24.029226 4760 generic.go:334] "Generic (PLEG): container finished" podID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerID="6a98a7ff2fb67050e5f9d9a04aeca1acc60e5c1c1f6198217d2a9e61a2faac5b" exitCode=0 Nov 25 08:24:24 
crc kubenswrapper[4760]: I1125 08:24:24.029482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6q8q" event={"ID":"5bc14e67-adcf-40d6-a429-2504db2317d2","Type":"ContainerDied","Data":"6a98a7ff2fb67050e5f9d9a04aeca1acc60e5c1c1f6198217d2a9e61a2faac5b"} Nov 25 08:24:24 crc kubenswrapper[4760]: I1125 08:24:24.029537 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6q8q" event={"ID":"5bc14e67-adcf-40d6-a429-2504db2317d2","Type":"ContainerStarted","Data":"5b3de04eaabf54cece2966f93ab616a77e73f8cb000b3a89fabd8c33849fd7ee"} Nov 25 08:24:24 crc kubenswrapper[4760]: I1125 08:24:24.065821 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-w94z5" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.037320 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltxdn" event={"ID":"5107cf72-3888-4d18-8afa-9e16fa0427d1","Type":"ContainerStarted","Data":"4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794"} Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.060819 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ltxdn" podStartSLOduration=2.642080507 podStartE2EDuration="6.060800645s" podCreationTimestamp="2025-11-25 08:24:19 +0000 UTC" firstStartedPulling="2025-11-25 08:24:21.000312687 +0000 UTC m=+794.709343482" lastFinishedPulling="2025-11-25 08:24:24.419032825 +0000 UTC m=+798.128063620" observedRunningTime="2025-11-25 08:24:25.056873281 +0000 UTC m=+798.765904146" watchObservedRunningTime="2025-11-25 08:24:25.060800645 +0000 UTC m=+798.769831430" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.299159 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd"] Nov 25 08:24:25 crc 
kubenswrapper[4760]: I1125 08:24:25.300349 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.301895 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-774sc" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.308168 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd"] Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.387391 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.387453 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjnqm\" (UniqueName: \"kubernetes.io/projected/929428c3-d839-4852-af22-badfb25ecbe5-kube-api-access-pjnqm\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.387484 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " 
pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.488621 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.488927 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjnqm\" (UniqueName: \"kubernetes.io/projected/929428c3-d839-4852-af22-badfb25ecbe5-kube-api-access-pjnqm\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.488959 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.489258 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.489407 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.515384 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjnqm\" (UniqueName: \"kubernetes.io/projected/929428c3-d839-4852-af22-badfb25ecbe5-kube-api-access-pjnqm\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:25 crc kubenswrapper[4760]: I1125 08:24:25.627077 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:26 crc kubenswrapper[4760]: I1125 08:24:26.001420 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd"] Nov 25 08:24:26 crc kubenswrapper[4760]: I1125 08:24:26.044577 4760 generic.go:334] "Generic (PLEG): container finished" podID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerID="77e81114a312d72e86e8c6a5b92fe4f0e57bbc96d36b4848713414f25601fa2b" exitCode=0 Nov 25 08:24:26 crc kubenswrapper[4760]: I1125 08:24:26.044831 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6q8q" event={"ID":"5bc14e67-adcf-40d6-a429-2504db2317d2","Type":"ContainerDied","Data":"77e81114a312d72e86e8c6a5b92fe4f0e57bbc96d36b4848713414f25601fa2b"} Nov 25 08:24:26 crc kubenswrapper[4760]: I1125 08:24:26.047736 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" event={"ID":"929428c3-d839-4852-af22-badfb25ecbe5","Type":"ContainerStarted","Data":"d58fed244fd53742cb0f614d68a694977390c810475bf61a55f1c71760bd2f3a"} Nov 25 08:24:27 crc kubenswrapper[4760]: I1125 08:24:27.056123 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6q8q" event={"ID":"5bc14e67-adcf-40d6-a429-2504db2317d2","Type":"ContainerStarted","Data":"a520c914e062d789e46b06a5e15b1a877b2a8561bead4de2ebc5f6e5c1d2a4a4"} Nov 25 08:24:27 crc kubenswrapper[4760]: I1125 08:24:27.057923 4760 generic.go:334] "Generic (PLEG): container finished" podID="929428c3-d839-4852-af22-badfb25ecbe5" containerID="485da49dfaa69b48a09fa373522b76a804325e653677bc5720441471910dabf5" exitCode=0 Nov 25 08:24:27 crc kubenswrapper[4760]: I1125 08:24:27.057951 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" event={"ID":"929428c3-d839-4852-af22-badfb25ecbe5","Type":"ContainerDied","Data":"485da49dfaa69b48a09fa373522b76a804325e653677bc5720441471910dabf5"} Nov 25 08:24:27 crc kubenswrapper[4760]: I1125 08:24:27.111518 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z6q8q" podStartSLOduration=2.603370846 podStartE2EDuration="5.111500591s" podCreationTimestamp="2025-11-25 08:24:22 +0000 UTC" firstStartedPulling="2025-11-25 08:24:24.030633113 +0000 UTC m=+797.739663908" lastFinishedPulling="2025-11-25 08:24:26.538762858 +0000 UTC m=+800.247793653" observedRunningTime="2025-11-25 08:24:27.077140938 +0000 UTC m=+800.786171733" watchObservedRunningTime="2025-11-25 08:24:27.111500591 +0000 UTC m=+800.820531386" Nov 25 08:24:27 crc kubenswrapper[4760]: I1125 08:24:27.797919 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6tjlx" Nov 
25 08:24:27 crc kubenswrapper[4760]: I1125 08:24:27.797978 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6tjlx" Nov 25 08:24:27 crc kubenswrapper[4760]: I1125 08:24:27.848518 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6tjlx" Nov 25 08:24:28 crc kubenswrapper[4760]: I1125 08:24:28.109652 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6tjlx" Nov 25 08:24:29 crc kubenswrapper[4760]: I1125 08:24:29.060352 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tjlx"] Nov 25 08:24:29 crc kubenswrapper[4760]: I1125 08:24:29.071410 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" event={"ID":"929428c3-d839-4852-af22-badfb25ecbe5","Type":"ContainerStarted","Data":"60db55e114ab5853142c1f2a44a99fc91f64a78b6660db6327124e1a4bea6260"} Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.081935 4760 generic.go:334] "Generic (PLEG): container finished" podID="929428c3-d839-4852-af22-badfb25ecbe5" containerID="60db55e114ab5853142c1f2a44a99fc91f64a78b6660db6327124e1a4bea6260" exitCode=0 Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.082012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" event={"ID":"929428c3-d839-4852-af22-badfb25ecbe5","Type":"ContainerDied","Data":"60db55e114ab5853142c1f2a44a99fc91f64a78b6660db6327124e1a4bea6260"} Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.082900 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6tjlx" podUID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerName="registry-server" 
containerID="cri-o://448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe" gracePeriod=2 Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.193534 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.193974 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.235596 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.492800 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjlx" Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.551105 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7b7c\" (UniqueName: \"kubernetes.io/projected/e1c63720-7aff-4a6a-8127-549ce45ba3e3-kube-api-access-p7b7c\") pod \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.551225 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-catalog-content\") pod \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.551318 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-utilities\") pod \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\" (UID: \"e1c63720-7aff-4a6a-8127-549ce45ba3e3\") " Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.552120 4760 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-utilities" (OuterVolumeSpecName: "utilities") pod "e1c63720-7aff-4a6a-8127-549ce45ba3e3" (UID: "e1c63720-7aff-4a6a-8127-549ce45ba3e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.555890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c63720-7aff-4a6a-8127-549ce45ba3e3-kube-api-access-p7b7c" (OuterVolumeSpecName: "kube-api-access-p7b7c") pod "e1c63720-7aff-4a6a-8127-549ce45ba3e3" (UID: "e1c63720-7aff-4a6a-8127-549ce45ba3e3"). InnerVolumeSpecName "kube-api-access-p7b7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.652546 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:30 crc kubenswrapper[4760]: I1125 08:24:30.652593 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7b7c\" (UniqueName: \"kubernetes.io/projected/e1c63720-7aff-4a6a-8127-549ce45ba3e3-kube-api-access-p7b7c\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.092108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" event={"ID":"929428c3-d839-4852-af22-badfb25ecbe5","Type":"ContainerStarted","Data":"81933fceb4a0e92bff10ab471719657a89b7150b2e343a4aaa1583ec9e4178c5"} Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.095515 4760 generic.go:334] "Generic (PLEG): container finished" podID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerID="448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe" exitCode=0 Nov 25 08:24:31 crc 
kubenswrapper[4760]: I1125 08:24:31.095613 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjlx" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.095596 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjlx" event={"ID":"e1c63720-7aff-4a6a-8127-549ce45ba3e3","Type":"ContainerDied","Data":"448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe"} Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.095689 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjlx" event={"ID":"e1c63720-7aff-4a6a-8127-549ce45ba3e3","Type":"ContainerDied","Data":"2d72dd875ee693a450014cf199c7aa6d46f9e48c321fd31b54aca8dbd3378df7"} Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.095720 4760 scope.go:117] "RemoveContainer" containerID="448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.117309 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" podStartSLOduration=4.9420704109999996 podStartE2EDuration="6.117291939s" podCreationTimestamp="2025-11-25 08:24:25 +0000 UTC" firstStartedPulling="2025-11-25 08:24:27.059162598 +0000 UTC m=+800.768193393" lastFinishedPulling="2025-11-25 08:24:28.234384136 +0000 UTC m=+801.943414921" observedRunningTime="2025-11-25 08:24:31.114892819 +0000 UTC m=+804.823923614" watchObservedRunningTime="2025-11-25 08:24:31.117291939 +0000 UTC m=+804.826322724" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.121306 4760 scope.go:117] "RemoveContainer" containerID="7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.142147 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.143582 4760 scope.go:117] "RemoveContainer" containerID="4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.163477 4760 scope.go:117] "RemoveContainer" containerID="448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe" Nov 25 08:24:31 crc kubenswrapper[4760]: E1125 08:24:31.164091 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe\": container with ID starting with 448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe not found: ID does not exist" containerID="448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.164131 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe"} err="failed to get container status \"448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe\": rpc error: code = NotFound desc = could not find container \"448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe\": container with ID starting with 448a23729b84d01129ce5b3a1794059df8b05724a413c50fc095bd3eeed3aefe not found: ID does not exist" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.164162 4760 scope.go:117] "RemoveContainer" containerID="7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c" Nov 25 08:24:31 crc kubenswrapper[4760]: E1125 08:24:31.164525 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c\": container with ID starting with 
7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c not found: ID does not exist" containerID="7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.164564 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c"} err="failed to get container status \"7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c\": rpc error: code = NotFound desc = could not find container \"7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c\": container with ID starting with 7f1abfd55f2a992083f4fec6c25d501a7690beacfb0796c504144ea9a2ec982c not found: ID does not exist" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.164587 4760 scope.go:117] "RemoveContainer" containerID="4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520" Nov 25 08:24:31 crc kubenswrapper[4760]: E1125 08:24:31.164905 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520\": container with ID starting with 4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520 not found: ID does not exist" containerID="4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.164936 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520"} err="failed to get container status \"4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520\": rpc error: code = NotFound desc = could not find container \"4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520\": container with ID starting with 4668b2280222ec313cebbf24d1f54388b2286e8a69729351b753840dbd842520 not found: ID does not 
exist" Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.746064 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:24:31 crc kubenswrapper[4760]: I1125 08:24:31.746123 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.105394 4760 generic.go:334] "Generic (PLEG): container finished" podID="929428c3-d839-4852-af22-badfb25ecbe5" containerID="81933fceb4a0e92bff10ab471719657a89b7150b2e343a4aaa1583ec9e4178c5" exitCode=0 Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.105471 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" event={"ID":"929428c3-d839-4852-af22-badfb25ecbe5","Type":"ContainerDied","Data":"81933fceb4a0e92bff10ab471719657a89b7150b2e343a4aaa1583ec9e4178c5"} Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.153132 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1c63720-7aff-4a6a-8127-549ce45ba3e3" (UID: "e1c63720-7aff-4a6a-8127-549ce45ba3e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.169702 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c63720-7aff-4a6a-8127-549ce45ba3e3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.325770 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tjlx"] Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.329284 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6tjlx"] Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.584679 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.584729 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.623958 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:32 crc kubenswrapper[4760]: I1125 08:24:32.946939 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" path="/var/lib/kubelet/pods/e1c63720-7aff-4a6a-8127-549ce45ba3e3/volumes" Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.164619 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.389289 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.488647 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-bundle\") pod \"929428c3-d839-4852-af22-badfb25ecbe5\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.488847 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjnqm\" (UniqueName: \"kubernetes.io/projected/929428c3-d839-4852-af22-badfb25ecbe5-kube-api-access-pjnqm\") pod \"929428c3-d839-4852-af22-badfb25ecbe5\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.488913 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-util\") pod \"929428c3-d839-4852-af22-badfb25ecbe5\" (UID: \"929428c3-d839-4852-af22-badfb25ecbe5\") " Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.489859 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-bundle" (OuterVolumeSpecName: "bundle") pod "929428c3-d839-4852-af22-badfb25ecbe5" (UID: "929428c3-d839-4852-af22-badfb25ecbe5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.494318 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/929428c3-d839-4852-af22-badfb25ecbe5-kube-api-access-pjnqm" (OuterVolumeSpecName: "kube-api-access-pjnqm") pod "929428c3-d839-4852-af22-badfb25ecbe5" (UID: "929428c3-d839-4852-af22-badfb25ecbe5"). InnerVolumeSpecName "kube-api-access-pjnqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.500879 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-util" (OuterVolumeSpecName: "util") pod "929428c3-d839-4852-af22-badfb25ecbe5" (UID: "929428c3-d839-4852-af22-badfb25ecbe5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.591118 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.591163 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjnqm\" (UniqueName: \"kubernetes.io/projected/929428c3-d839-4852-af22-badfb25ecbe5-kube-api-access-pjnqm\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:33 crc kubenswrapper[4760]: I1125 08:24:33.591181 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929428c3-d839-4852-af22-badfb25ecbe5-util\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:34 crc kubenswrapper[4760]: I1125 08:24:34.127374 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" Nov 25 08:24:34 crc kubenswrapper[4760]: I1125 08:24:34.127357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd" event={"ID":"929428c3-d839-4852-af22-badfb25ecbe5","Type":"ContainerDied","Data":"d58fed244fd53742cb0f614d68a694977390c810475bf61a55f1c71760bd2f3a"} Nov 25 08:24:34 crc kubenswrapper[4760]: I1125 08:24:34.127787 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58fed244fd53742cb0f614d68a694977390c810475bf61a55f1c71760bd2f3a" Nov 25 08:24:35 crc kubenswrapper[4760]: I1125 08:24:35.663985 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltxdn"] Nov 25 08:24:35 crc kubenswrapper[4760]: I1125 08:24:35.664221 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ltxdn" podUID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerName="registry-server" containerID="cri-o://4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794" gracePeriod=2 Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.076234 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.119762 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-catalog-content\") pod \"5107cf72-3888-4d18-8afa-9e16fa0427d1\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.119850 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twztj\" (UniqueName: \"kubernetes.io/projected/5107cf72-3888-4d18-8afa-9e16fa0427d1-kube-api-access-twztj\") pod \"5107cf72-3888-4d18-8afa-9e16fa0427d1\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.119892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-utilities\") pod \"5107cf72-3888-4d18-8afa-9e16fa0427d1\" (UID: \"5107cf72-3888-4d18-8afa-9e16fa0427d1\") " Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.120816 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-utilities" (OuterVolumeSpecName: "utilities") pod "5107cf72-3888-4d18-8afa-9e16fa0427d1" (UID: "5107cf72-3888-4d18-8afa-9e16fa0427d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.131474 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5107cf72-3888-4d18-8afa-9e16fa0427d1-kube-api-access-twztj" (OuterVolumeSpecName: "kube-api-access-twztj") pod "5107cf72-3888-4d18-8afa-9e16fa0427d1" (UID: "5107cf72-3888-4d18-8afa-9e16fa0427d1"). InnerVolumeSpecName "kube-api-access-twztj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.141906 4760 generic.go:334] "Generic (PLEG): container finished" podID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerID="4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794" exitCode=0 Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.141959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltxdn" event={"ID":"5107cf72-3888-4d18-8afa-9e16fa0427d1","Type":"ContainerDied","Data":"4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794"} Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.141976 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltxdn" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.141993 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltxdn" event={"ID":"5107cf72-3888-4d18-8afa-9e16fa0427d1","Type":"ContainerDied","Data":"4b1452dce1bbb64fde6a69e054a6c7c3ff0441430eb654b87592e9803fdebceb"} Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.142019 4760 scope.go:117] "RemoveContainer" containerID="4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.173853 4760 scope.go:117] "RemoveContainer" containerID="cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.193822 4760 scope.go:117] "RemoveContainer" containerID="92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.211587 4760 scope.go:117] "RemoveContainer" containerID="4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.212063 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794\": container with ID starting with 4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794 not found: ID does not exist" containerID="4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.212101 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794"} err="failed to get container status \"4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794\": rpc error: code = NotFound desc = could not find container \"4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794\": container with ID starting with 4868b1a86019b1e111aeb54087bee877f15b558fd495e7d39e88db64fa703794 not found: ID does not exist" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.212122 4760 scope.go:117] "RemoveContainer" containerID="cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.212440 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31\": container with ID starting with cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31 not found: ID does not exist" containerID="cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.212466 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31"} err="failed to get container status \"cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31\": rpc error: code = NotFound desc = could not find container 
\"cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31\": container with ID starting with cedff5e930246e6c16fba20fd85fd012af0f8a38ebf26092c98bcab114ceed31 not found: ID does not exist" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.212480 4760 scope.go:117] "RemoveContainer" containerID="92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.212663 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b\": container with ID starting with 92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b not found: ID does not exist" containerID="92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.212689 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b"} err="failed to get container status \"92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b\": rpc error: code = NotFound desc = could not find container \"92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b\": container with ID starting with 92a7fe579adbde2930511c475df59cddfd147234c431c3c1d2830a2f5478a55b not found: ID does not exist" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.220951 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twztj\" (UniqueName: \"kubernetes.io/projected/5107cf72-3888-4d18-8afa-9e16fa0427d1-kube-api-access-twztj\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.220982 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:36 crc 
kubenswrapper[4760]: I1125 08:24:36.226763 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5107cf72-3888-4d18-8afa-9e16fa0427d1" (UID: "5107cf72-3888-4d18-8afa-9e16fa0427d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.321605 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5107cf72-3888-4d18-8afa-9e16fa0427d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.368691 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms"] Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.369076 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerName="extract-content" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369097 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerName="extract-content" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.369108 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929428c3-d839-4852-af22-badfb25ecbe5" containerName="util" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369115 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="929428c3-d839-4852-af22-badfb25ecbe5" containerName="util" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.369128 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929428c3-d839-4852-af22-badfb25ecbe5" containerName="extract" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369136 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="929428c3-d839-4852-af22-badfb25ecbe5" containerName="extract" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.369151 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerName="extract-utilities" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369163 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerName="extract-utilities" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.369189 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerName="extract-content" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369198 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerName="extract-content" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.369212 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerName="extract-utilities" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369219 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerName="extract-utilities" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.369226 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929428c3-d839-4852-af22-badfb25ecbe5" containerName="pull" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369235 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="929428c3-d839-4852-af22-badfb25ecbe5" containerName="pull" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.369263 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerName="registry-server" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369275 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerName="registry-server" Nov 25 08:24:36 crc kubenswrapper[4760]: E1125 08:24:36.369283 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerName="registry-server" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369290 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerName="registry-server" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369464 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c63720-7aff-4a6a-8127-549ce45ba3e3" containerName="registry-server" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369483 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5107cf72-3888-4d18-8afa-9e16fa0427d1" containerName="registry-server" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.369496 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="929428c3-d839-4852-af22-badfb25ecbe5" containerName="extract" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.370078 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.372452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-7dknh" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.403086 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms"] Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.472002 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltxdn"] Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.475481 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ltxdn"] Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.524242 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzcnw\" (UniqueName: \"kubernetes.io/projected/2a8a302a-2ee0-4717-9558-74db40b7dfb1-kube-api-access-pzcnw\") pod \"openstack-operator-controller-operator-7b567956b5-4z6ms\" (UID: \"2a8a302a-2ee0-4717-9558-74db40b7dfb1\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.625210 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzcnw\" (UniqueName: \"kubernetes.io/projected/2a8a302a-2ee0-4717-9558-74db40b7dfb1-kube-api-access-pzcnw\") pod \"openstack-operator-controller-operator-7b567956b5-4z6ms\" (UID: \"2a8a302a-2ee0-4717-9558-74db40b7dfb1\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.643705 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzcnw\" (UniqueName: 
\"kubernetes.io/projected/2a8a302a-2ee0-4717-9558-74db40b7dfb1-kube-api-access-pzcnw\") pod \"openstack-operator-controller-operator-7b567956b5-4z6ms\" (UID: \"2a8a302a-2ee0-4717-9558-74db40b7dfb1\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.686193 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" Nov 25 08:24:36 crc kubenswrapper[4760]: I1125 08:24:36.954693 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5107cf72-3888-4d18-8afa-9e16fa0427d1" path="/var/lib/kubelet/pods/5107cf72-3888-4d18-8afa-9e16fa0427d1/volumes" Nov 25 08:24:37 crc kubenswrapper[4760]: I1125 08:24:37.286054 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms"] Nov 25 08:24:38 crc kubenswrapper[4760]: I1125 08:24:38.161071 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" event={"ID":"2a8a302a-2ee0-4717-9558-74db40b7dfb1","Type":"ContainerStarted","Data":"e7391f3e3ecedffbda3fdc90b941dbbad3fb1ed16c0103853c79b52a61284126"} Nov 25 08:24:38 crc kubenswrapper[4760]: I1125 08:24:38.864191 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6q8q"] Nov 25 08:24:38 crc kubenswrapper[4760]: I1125 08:24:38.864974 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z6q8q" podUID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerName="registry-server" containerID="cri-o://a520c914e062d789e46b06a5e15b1a877b2a8561bead4de2ebc5f6e5c1d2a4a4" gracePeriod=2 Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.187871 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerID="a520c914e062d789e46b06a5e15b1a877b2a8561bead4de2ebc5f6e5c1d2a4a4" exitCode=0 Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.187917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6q8q" event={"ID":"5bc14e67-adcf-40d6-a429-2504db2317d2","Type":"ContainerDied","Data":"a520c914e062d789e46b06a5e15b1a877b2a8561bead4de2ebc5f6e5c1d2a4a4"} Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.296967 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.483352 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-utilities\") pod \"5bc14e67-adcf-40d6-a429-2504db2317d2\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.483489 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-catalog-content\") pod \"5bc14e67-adcf-40d6-a429-2504db2317d2\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.483535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86bdd\" (UniqueName: \"kubernetes.io/projected/5bc14e67-adcf-40d6-a429-2504db2317d2-kube-api-access-86bdd\") pod \"5bc14e67-adcf-40d6-a429-2504db2317d2\" (UID: \"5bc14e67-adcf-40d6-a429-2504db2317d2\") " Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.484109 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-utilities" (OuterVolumeSpecName: "utilities") pod 
"5bc14e67-adcf-40d6-a429-2504db2317d2" (UID: "5bc14e67-adcf-40d6-a429-2504db2317d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.510684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc14e67-adcf-40d6-a429-2504db2317d2-kube-api-access-86bdd" (OuterVolumeSpecName: "kube-api-access-86bdd") pod "5bc14e67-adcf-40d6-a429-2504db2317d2" (UID: "5bc14e67-adcf-40d6-a429-2504db2317d2"). InnerVolumeSpecName "kube-api-access-86bdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.551660 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bc14e67-adcf-40d6-a429-2504db2317d2" (UID: "5bc14e67-adcf-40d6-a429-2504db2317d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.584490 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.584536 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bc14e67-adcf-40d6-a429-2504db2317d2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:39 crc kubenswrapper[4760]: I1125 08:24:39.584553 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86bdd\" (UniqueName: \"kubernetes.io/projected/5bc14e67-adcf-40d6-a429-2504db2317d2-kube-api-access-86bdd\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:40 crc kubenswrapper[4760]: I1125 08:24:40.199074 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6q8q" event={"ID":"5bc14e67-adcf-40d6-a429-2504db2317d2","Type":"ContainerDied","Data":"5b3de04eaabf54cece2966f93ab616a77e73f8cb000b3a89fabd8c33849fd7ee"} Nov 25 08:24:40 crc kubenswrapper[4760]: I1125 08:24:40.199130 4760 scope.go:117] "RemoveContainer" containerID="a520c914e062d789e46b06a5e15b1a877b2a8561bead4de2ebc5f6e5c1d2a4a4" Nov 25 08:24:40 crc kubenswrapper[4760]: I1125 08:24:40.199263 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6q8q" Nov 25 08:24:40 crc kubenswrapper[4760]: I1125 08:24:40.233342 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6q8q"] Nov 25 08:24:40 crc kubenswrapper[4760]: I1125 08:24:40.238270 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z6q8q"] Nov 25 08:24:40 crc kubenswrapper[4760]: I1125 08:24:40.945308 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc14e67-adcf-40d6-a429-2504db2317d2" path="/var/lib/kubelet/pods/5bc14e67-adcf-40d6-a429-2504db2317d2/volumes" Nov 25 08:24:41 crc kubenswrapper[4760]: I1125 08:24:41.954379 4760 scope.go:117] "RemoveContainer" containerID="77e81114a312d72e86e8c6a5b92fe4f0e57bbc96d36b4848713414f25601fa2b" Nov 25 08:24:41 crc kubenswrapper[4760]: I1125 08:24:41.985888 4760 scope.go:117] "RemoveContainer" containerID="6a98a7ff2fb67050e5f9d9a04aeca1acc60e5c1c1f6198217d2a9e61a2faac5b" Nov 25 08:24:42 crc kubenswrapper[4760]: I1125 08:24:42.226946 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" event={"ID":"2a8a302a-2ee0-4717-9558-74db40b7dfb1","Type":"ContainerStarted","Data":"aa52c6121ea7e90ab8a20330daec615c4f9b8803b2313580748d76965c14b5c7"} Nov 25 08:24:42 crc kubenswrapper[4760]: I1125 08:24:42.227285 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" Nov 25 08:24:42 crc kubenswrapper[4760]: I1125 08:24:42.263684 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" podStartSLOduration=1.54816406 podStartE2EDuration="6.263662552s" podCreationTimestamp="2025-11-25 08:24:36 +0000 UTC" firstStartedPulling="2025-11-25 08:24:37.289656764 +0000 UTC 
m=+810.998687559" lastFinishedPulling="2025-11-25 08:24:42.005155256 +0000 UTC m=+815.714186051" observedRunningTime="2025-11-25 08:24:42.256217777 +0000 UTC m=+815.965248582" watchObservedRunningTime="2025-11-25 08:24:42.263662552 +0000 UTC m=+815.972693347" Nov 25 08:24:46 crc kubenswrapper[4760]: I1125 08:24:46.889198 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d5n48"] Nov 25 08:24:46 crc kubenswrapper[4760]: E1125 08:24:46.890036 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerName="extract-content" Nov 25 08:24:46 crc kubenswrapper[4760]: I1125 08:24:46.890052 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerName="extract-content" Nov 25 08:24:46 crc kubenswrapper[4760]: E1125 08:24:46.890074 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerName="registry-server" Nov 25 08:24:46 crc kubenswrapper[4760]: I1125 08:24:46.890083 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerName="registry-server" Nov 25 08:24:46 crc kubenswrapper[4760]: E1125 08:24:46.890092 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerName="extract-utilities" Nov 25 08:24:46 crc kubenswrapper[4760]: I1125 08:24:46.890100 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerName="extract-utilities" Nov 25 08:24:46 crc kubenswrapper[4760]: I1125 08:24:46.890239 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc14e67-adcf-40d6-a429-2504db2317d2" containerName="registry-server" Nov 25 08:24:46 crc kubenswrapper[4760]: I1125 08:24:46.891504 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:46 crc kubenswrapper[4760]: I1125 08:24:46.909789 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5n48"] Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.084820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-utilities\") pod \"redhat-marketplace-d5n48\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.084914 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-catalog-content\") pod \"redhat-marketplace-d5n48\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.085438 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8ft\" (UniqueName: \"kubernetes.io/projected/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-kube-api-access-4m8ft\") pod \"redhat-marketplace-d5n48\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.186047 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-catalog-content\") pod \"redhat-marketplace-d5n48\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.186185 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4m8ft\" (UniqueName: \"kubernetes.io/projected/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-kube-api-access-4m8ft\") pod \"redhat-marketplace-d5n48\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.186210 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-utilities\") pod \"redhat-marketplace-d5n48\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.186608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-catalog-content\") pod \"redhat-marketplace-d5n48\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.186614 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-utilities\") pod \"redhat-marketplace-d5n48\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.207220 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8ft\" (UniqueName: \"kubernetes.io/projected/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-kube-api-access-4m8ft\") pod \"redhat-marketplace-d5n48\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.210293 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:47 crc kubenswrapper[4760]: I1125 08:24:47.630282 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5n48"] Nov 25 08:24:48 crc kubenswrapper[4760]: I1125 08:24:48.264876 4760 generic.go:334] "Generic (PLEG): container finished" podID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerID="9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30" exitCode=0 Nov 25 08:24:48 crc kubenswrapper[4760]: I1125 08:24:48.264912 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5n48" event={"ID":"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b","Type":"ContainerDied","Data":"9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30"} Nov 25 08:24:48 crc kubenswrapper[4760]: I1125 08:24:48.265152 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5n48" event={"ID":"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b","Type":"ContainerStarted","Data":"86dcd9cd55694e70069f03aa207244a71d7b5ea31faef4939945aa6d716181b7"} Nov 25 08:24:49 crc kubenswrapper[4760]: I1125 08:24:49.278685 4760 generic.go:334] "Generic (PLEG): container finished" podID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerID="3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6" exitCode=0 Nov 25 08:24:49 crc kubenswrapper[4760]: I1125 08:24:49.278790 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5n48" event={"ID":"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b","Type":"ContainerDied","Data":"3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6"} Nov 25 08:24:50 crc kubenswrapper[4760]: I1125 08:24:50.287056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5n48" 
event={"ID":"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b","Type":"ContainerStarted","Data":"33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb"} Nov 25 08:24:50 crc kubenswrapper[4760]: I1125 08:24:50.304546 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d5n48" podStartSLOduration=2.912000851 podStartE2EDuration="4.304517062s" podCreationTimestamp="2025-11-25 08:24:46 +0000 UTC" firstStartedPulling="2025-11-25 08:24:48.266472993 +0000 UTC m=+821.975503788" lastFinishedPulling="2025-11-25 08:24:49.658989194 +0000 UTC m=+823.368019999" observedRunningTime="2025-11-25 08:24:50.303548664 +0000 UTC m=+824.012579459" watchObservedRunningTime="2025-11-25 08:24:50.304517062 +0000 UTC m=+824.013547857" Nov 25 08:24:56 crc kubenswrapper[4760]: I1125 08:24:56.691682 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" Nov 25 08:24:57 crc kubenswrapper[4760]: I1125 08:24:57.210709 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:57 crc kubenswrapper[4760]: I1125 08:24:57.210781 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:57 crc kubenswrapper[4760]: I1125 08:24:57.260641 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:57 crc kubenswrapper[4760]: I1125 08:24:57.368494 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:24:59 crc kubenswrapper[4760]: I1125 08:24:59.661076 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5n48"] Nov 25 08:24:59 crc kubenswrapper[4760]: I1125 08:24:59.661361 4760 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d5n48" podUID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerName="registry-server" containerID="cri-o://33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb" gracePeriod=2 Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.053601 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.148650 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-catalog-content\") pod \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.148706 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-utilities\") pod \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.150027 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-utilities" (OuterVolumeSpecName: "utilities") pod "41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" (UID: "41c2d2f8-9685-47a3-8bb1-4b088dbdd79b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.165484 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" (UID: "41c2d2f8-9685-47a3-8bb1-4b088dbdd79b"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.250079 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m8ft\" (UniqueName: \"kubernetes.io/projected/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-kube-api-access-4m8ft\") pod \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\" (UID: \"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b\") " Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.250391 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.250409 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.256524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-kube-api-access-4m8ft" (OuterVolumeSpecName: "kube-api-access-4m8ft") pod "41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" (UID: "41c2d2f8-9685-47a3-8bb1-4b088dbdd79b"). InnerVolumeSpecName "kube-api-access-4m8ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.347350 4760 generic.go:334] "Generic (PLEG): container finished" podID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerID="33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb" exitCode=0 Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.347396 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5n48" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.347399 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5n48" event={"ID":"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b","Type":"ContainerDied","Data":"33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb"} Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.347440 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5n48" event={"ID":"41c2d2f8-9685-47a3-8bb1-4b088dbdd79b","Type":"ContainerDied","Data":"86dcd9cd55694e70069f03aa207244a71d7b5ea31faef4939945aa6d716181b7"} Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.347457 4760 scope.go:117] "RemoveContainer" containerID="33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.351635 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m8ft\" (UniqueName: \"kubernetes.io/projected/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b-kube-api-access-4m8ft\") on node \"crc\" DevicePath \"\"" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.367454 4760 scope.go:117] "RemoveContainer" containerID="3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.375153 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5n48"] Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.381371 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5n48"] Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.414957 4760 scope.go:117] "RemoveContainer" containerID="9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.430852 4760 scope.go:117] "RemoveContainer" 
containerID="33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb" Nov 25 08:25:00 crc kubenswrapper[4760]: E1125 08:25:00.432051 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb\": container with ID starting with 33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb not found: ID does not exist" containerID="33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.432081 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb"} err="failed to get container status \"33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb\": rpc error: code = NotFound desc = could not find container \"33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb\": container with ID starting with 33c78dcdec3d0b7f8451dbdf2ac7bb4d76055ecc37405f2fd798649cf1c4b4bb not found: ID does not exist" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.432107 4760 scope.go:117] "RemoveContainer" containerID="3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6" Nov 25 08:25:00 crc kubenswrapper[4760]: E1125 08:25:00.433040 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6\": container with ID starting with 3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6 not found: ID does not exist" containerID="3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.433062 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6"} err="failed to get container status \"3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6\": rpc error: code = NotFound desc = could not find container \"3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6\": container with ID starting with 3b4ab3987243e27395d4f73c4177bfc37dfb26ed79a961669d016405f651fea6 not found: ID does not exist" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.433075 4760 scope.go:117] "RemoveContainer" containerID="9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30" Nov 25 08:25:00 crc kubenswrapper[4760]: E1125 08:25:00.433353 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30\": container with ID starting with 9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30 not found: ID does not exist" containerID="9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.433381 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30"} err="failed to get container status \"9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30\": rpc error: code = NotFound desc = could not find container \"9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30\": container with ID starting with 9ba88e7f41d785e79a7c979b618d2de19d0560c9e08b369de1855788e52c4c30 not found: ID does not exist" Nov 25 08:25:00 crc kubenswrapper[4760]: I1125 08:25:00.945088 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" path="/var/lib/kubelet/pods/41c2d2f8-9685-47a3-8bb1-4b088dbdd79b/volumes" Nov 25 08:25:01 crc kubenswrapper[4760]: I1125 
08:25:01.746792 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:25:01 crc kubenswrapper[4760]: I1125 08:25:01.746853 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:25:01 crc kubenswrapper[4760]: I1125 08:25:01.746920 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:25:01 crc kubenswrapper[4760]: I1125 08:25:01.747663 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b1cf405379b8f080f8ca00a8aea4c263e37ea8900c6a162c41370800ee44d84"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:25:01 crc kubenswrapper[4760]: I1125 08:25:01.747730 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://1b1cf405379b8f080f8ca00a8aea4c263e37ea8900c6a162c41370800ee44d84" gracePeriod=600 Nov 25 08:25:02 crc kubenswrapper[4760]: I1125 08:25:02.360684 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="1b1cf405379b8f080f8ca00a8aea4c263e37ea8900c6a162c41370800ee44d84" exitCode=0 Nov 25 
08:25:02 crc kubenswrapper[4760]: I1125 08:25:02.360733 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"1b1cf405379b8f080f8ca00a8aea4c263e37ea8900c6a162c41370800ee44d84"} Nov 25 08:25:02 crc kubenswrapper[4760]: I1125 08:25:02.361091 4760 scope.go:117] "RemoveContainer" containerID="8ea91d6699ab5d174bc8311b29a2b59a97368218bd86cb03b23aecea38616074" Nov 25 08:25:03 crc kubenswrapper[4760]: I1125 08:25:03.369654 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"b9e0ecc3c247b6af19eb122bc74a94901ef917b6bb9d5aef56c5a3aafb61bcb8"} Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.537771 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf"] Nov 25 08:25:13 crc kubenswrapper[4760]: E1125 08:25:13.538518 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerName="registry-server" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.538530 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerName="registry-server" Nov 25 08:25:13 crc kubenswrapper[4760]: E1125 08:25:13.538545 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerName="extract-content" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.538551 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerName="extract-content" Nov 25 08:25:13 crc kubenswrapper[4760]: E1125 08:25:13.538563 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerName="extract-utilities" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.538571 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerName="extract-utilities" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.538691 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c2d2f8-9685-47a3-8bb1-4b088dbdd79b" containerName="registry-server" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.539333 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.543499 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jpfdb" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.543784 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.544698 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.546424 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-cbzzc" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.550548 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.556330 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.578318 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.579545 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.581329 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-dnpxg" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.595224 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.624366 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.626190 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.629773 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-shlwm" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.632303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.640808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl48h\" (UniqueName: \"kubernetes.io/projected/97e97ce2-b50b-478e-acb2-cbdd5232d67c-kube-api-access-wl48h\") pod \"barbican-operator-controller-manager-86dc4d89c8-hlbbf\" (UID: \"97e97ce2-b50b-478e-acb2-cbdd5232d67c\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.640880 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8rn\" (UniqueName: \"kubernetes.io/projected/03a9ee81-2733-444d-8edc-ddb1303b5686-kube-api-access-9l8rn\") pod \"cinder-operator-controller-manager-79856dc55c-k4dk2\" (UID: \"03a9ee81-2733-444d-8edc-ddb1303b5686\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.640906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khqjk\" (UniqueName: \"kubernetes.io/projected/f531ae0e-78ad-4d2c-951f-0d1f7d1c8129-kube-api-access-khqjk\") pod \"designate-operator-controller-manager-7d695c9b56-xghfv\" (UID: \"f531ae0e-78ad-4d2c-951f-0d1f7d1c8129\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 
08:25:13.640966 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4pg8\" (UniqueName: \"kubernetes.io/projected/25f372bf-e250-492b-abb9-680b1efdbdec-kube-api-access-p4pg8\") pod \"glance-operator-controller-manager-68b95954c9-6cjlz\" (UID: \"25f372bf-e250-492b-abb9-680b1efdbdec\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.646117 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.647178 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.651114 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4sqtg" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.660315 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-l24ns"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.661447 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.668615 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-h2n7f" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.676752 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.685899 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.687121 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.698677 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.699002 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wgbnv" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.699122 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-l24ns"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.703241 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.724436 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.725506 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.726396 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.727056 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.734682 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-c7bkp" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.742429 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-t29h5" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743323 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4pg8\" (UniqueName: \"kubernetes.io/projected/25f372bf-e250-492b-abb9-680b1efdbdec-kube-api-access-p4pg8\") pod \"glance-operator-controller-manager-68b95954c9-6cjlz\" (UID: \"25f372bf-e250-492b-abb9-680b1efdbdec\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl48h\" (UniqueName: \"kubernetes.io/projected/97e97ce2-b50b-478e-acb2-cbdd5232d67c-kube-api-access-wl48h\") pod \"barbican-operator-controller-manager-86dc4d89c8-hlbbf\" (UID: \"97e97ce2-b50b-478e-acb2-cbdd5232d67c\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33faed21-8b19-4064-a6e2-5064ce8cbab2-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-njfjf\" (UID: \"33faed21-8b19-4064-a6e2-5064ce8cbab2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743484 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nk86\" (UniqueName: \"kubernetes.io/projected/1d556614-e3c1-4834-919a-0c6f5f5cc4de-kube-api-access-9nk86\") pod \"keystone-operator-controller-manager-748dc6576f-kw54v\" (UID: \"1d556614-e3c1-4834-919a-0c6f5f5cc4de\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743510 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dc9\" (UniqueName: \"kubernetes.io/projected/33faed21-8b19-4064-a6e2-5064ce8cbab2-kube-api-access-x6dc9\") pod \"infra-operator-controller-manager-d5cc86f4b-njfjf\" (UID: \"33faed21-8b19-4064-a6e2-5064ce8cbab2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l8rn\" (UniqueName: \"kubernetes.io/projected/03a9ee81-2733-444d-8edc-ddb1303b5686-kube-api-access-9l8rn\") pod \"cinder-operator-controller-manager-79856dc55c-k4dk2\" (UID: \"03a9ee81-2733-444d-8edc-ddb1303b5686\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khqjk\" (UniqueName: \"kubernetes.io/projected/f531ae0e-78ad-4d2c-951f-0d1f7d1c8129-kube-api-access-khqjk\") pod 
\"designate-operator-controller-manager-7d695c9b56-xghfv\" (UID: \"f531ae0e-78ad-4d2c-951f-0d1f7d1c8129\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw26h\" (UniqueName: \"kubernetes.io/projected/6dde35ac-ff01-4e46-9eae-234e6abc37dc-kube-api-access-hw26h\") pod \"ironic-operator-controller-manager-5bfcdc958c-x7r44\" (UID: \"6dde35ac-ff01-4e46-9eae-234e6abc37dc\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743614 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m5x2\" (UniqueName: \"kubernetes.io/projected/890067e5-2be8-4699-8d90-f2771ef453e5-kube-api-access-4m5x2\") pod \"horizon-operator-controller-manager-68c9694994-l28cr\" (UID: \"890067e5-2be8-4699-8d90-f2771ef453e5\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.743658 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98f7g\" (UniqueName: \"kubernetes.io/projected/b4325bd6-c276-4fbc-bc67-cf5a026c3537-kube-api-access-98f7g\") pod \"heat-operator-controller-manager-774b86978c-l24ns\" (UID: \"b4325bd6-c276-4fbc-bc67-cf5a026c3537\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.764303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.815837 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl48h\" (UniqueName: 
\"kubernetes.io/projected/97e97ce2-b50b-478e-acb2-cbdd5232d67c-kube-api-access-wl48h\") pod \"barbican-operator-controller-manager-86dc4d89c8-hlbbf\" (UID: \"97e97ce2-b50b-478e-acb2-cbdd5232d67c\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.815997 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4pg8\" (UniqueName: \"kubernetes.io/projected/25f372bf-e250-492b-abb9-680b1efdbdec-kube-api-access-p4pg8\") pod \"glance-operator-controller-manager-68b95954c9-6cjlz\" (UID: \"25f372bf-e250-492b-abb9-680b1efdbdec\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.842096 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.847021 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33faed21-8b19-4064-a6e2-5064ce8cbab2-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-njfjf\" (UID: \"33faed21-8b19-4064-a6e2-5064ce8cbab2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.847073 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nk86\" (UniqueName: \"kubernetes.io/projected/1d556614-e3c1-4834-919a-0c6f5f5cc4de-kube-api-access-9nk86\") pod \"keystone-operator-controller-manager-748dc6576f-kw54v\" (UID: \"1d556614-e3c1-4834-919a-0c6f5f5cc4de\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.847105 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dc9\" (UniqueName: 
\"kubernetes.io/projected/33faed21-8b19-4064-a6e2-5064ce8cbab2-kube-api-access-x6dc9\") pod \"infra-operator-controller-manager-d5cc86f4b-njfjf\" (UID: \"33faed21-8b19-4064-a6e2-5064ce8cbab2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.847133 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw26h\" (UniqueName: \"kubernetes.io/projected/6dde35ac-ff01-4e46-9eae-234e6abc37dc-kube-api-access-hw26h\") pod \"ironic-operator-controller-manager-5bfcdc958c-x7r44\" (UID: \"6dde35ac-ff01-4e46-9eae-234e6abc37dc\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.847156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m5x2\" (UniqueName: \"kubernetes.io/projected/890067e5-2be8-4699-8d90-f2771ef453e5-kube-api-access-4m5x2\") pod \"horizon-operator-controller-manager-68c9694994-l28cr\" (UID: \"890067e5-2be8-4699-8d90-f2771ef453e5\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.847196 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98f7g\" (UniqueName: \"kubernetes.io/projected/b4325bd6-c276-4fbc-bc67-cf5a026c3537-kube-api-access-98f7g\") pod \"heat-operator-controller-manager-774b86978c-l24ns\" (UID: \"b4325bd6-c276-4fbc-bc67-cf5a026c3537\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 08:25:13 crc kubenswrapper[4760]: E1125 08:25:13.847740 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 08:25:13 crc kubenswrapper[4760]: E1125 08:25:13.847807 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/33faed21-8b19-4064-a6e2-5064ce8cbab2-cert podName:33faed21-8b19-4064-a6e2-5064ce8cbab2 nodeName:}" failed. No retries permitted until 2025-11-25 08:25:14.347787329 +0000 UTC m=+848.056818124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33faed21-8b19-4064-a6e2-5064ce8cbab2-cert") pod "infra-operator-controller-manager-d5cc86f4b-njfjf" (UID: "33faed21-8b19-4064-a6e2-5064ce8cbab2") : secret "infra-operator-webhook-server-cert" not found Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.848620 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khqjk\" (UniqueName: \"kubernetes.io/projected/f531ae0e-78ad-4d2c-951f-0d1f7d1c8129-kube-api-access-khqjk\") pod \"designate-operator-controller-manager-7d695c9b56-xghfv\" (UID: \"f531ae0e-78ad-4d2c-951f-0d1f7d1c8129\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.856787 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.865971 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l8rn\" (UniqueName: \"kubernetes.io/projected/03a9ee81-2733-444d-8edc-ddb1303b5686-kube-api-access-9l8rn\") pod \"cinder-operator-controller-manager-79856dc55c-k4dk2\" (UID: \"03a9ee81-2733-444d-8edc-ddb1303b5686\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.873921 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98f7g\" (UniqueName: \"kubernetes.io/projected/b4325bd6-c276-4fbc-bc67-cf5a026c3537-kube-api-access-98f7g\") pod \"heat-operator-controller-manager-774b86978c-l24ns\" (UID: \"b4325bd6-c276-4fbc-bc67-cf5a026c3537\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.875753 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.899582 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dc9\" (UniqueName: \"kubernetes.io/projected/33faed21-8b19-4064-a6e2-5064ce8cbab2-kube-api-access-x6dc9\") pod \"infra-operator-controller-manager-d5cc86f4b-njfjf\" (UID: \"33faed21-8b19-4064-a6e2-5064ce8cbab2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.899886 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.911894 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m5x2\" (UniqueName: \"kubernetes.io/projected/890067e5-2be8-4699-8d90-f2771ef453e5-kube-api-access-4m5x2\") pod \"horizon-operator-controller-manager-68c9694994-l28cr\" (UID: \"890067e5-2be8-4699-8d90-f2771ef453e5\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.920362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nk86\" (UniqueName: \"kubernetes.io/projected/1d556614-e3c1-4834-919a-0c6f5f5cc4de-kube-api-access-9nk86\") pod \"keystone-operator-controller-manager-748dc6576f-kw54v\" (UID: \"1d556614-e3c1-4834-919a-0c6f5f5cc4de\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.923784 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw26h\" (UniqueName: \"kubernetes.io/projected/6dde35ac-ff01-4e46-9eae-234e6abc37dc-kube-api-access-hw26h\") pod \"ironic-operator-controller-manager-5bfcdc958c-x7r44\" (UID: \"6dde35ac-ff01-4e46-9eae-234e6abc37dc\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.938013 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.942181 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.951688 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.961055 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rcnq4" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.991367 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64"] Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.993454 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 08:25:13 crc kubenswrapper[4760]: I1125 08:25:13.994023 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.020515 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.021902 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.025739 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6tpnz" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.032738 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.057225 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzqq\" (UniqueName: \"kubernetes.io/projected/f0f31412-34be-4b9d-8df1-b53d23abb1f6-kube-api-access-qgzqq\") pod \"manila-operator-controller-manager-58bb8d67cc-s4q64\" (UID: \"f0f31412-34be-4b9d-8df1-b53d23abb1f6\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.057836 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.066370 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.073919 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.075282 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.098120 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bwhbm" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.119322 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.120895 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.131275 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-jr5wk" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.158791 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rftlz\" (UniqueName: \"kubernetes.io/projected/002e6b13-60c5-484c-8116-b4d5241ed678-kube-api-access-rftlz\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-54bpm\" (UID: \"002e6b13-60c5-484c-8116-b4d5241ed678\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.158862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgzqq\" (UniqueName: \"kubernetes.io/projected/f0f31412-34be-4b9d-8df1-b53d23abb1f6-kube-api-access-qgzqq\") pod \"manila-operator-controller-manager-58bb8d67cc-s4q64\" (UID: \"f0f31412-34be-4b9d-8df1-b53d23abb1f6\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.192990 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.260451 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-595q6\" (UniqueName: \"kubernetes.io/projected/9291524e-d650-4366-b795-162d53bf2815-kube-api-access-595q6\") pod \"neutron-operator-controller-manager-7c57c8bbc4-l7cv5\" (UID: \"9291524e-d650-4366-b795-162d53bf2815\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.260500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rftlz\" (UniqueName: \"kubernetes.io/projected/002e6b13-60c5-484c-8116-b4d5241ed678-kube-api-access-rftlz\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-54bpm\" (UID: \"002e6b13-60c5-484c-8116-b4d5241ed678\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.260566 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4cl\" (UniqueName: \"kubernetes.io/projected/4e773e83-c06c-47e9-8a34-ef72472e3ae8-kube-api-access-xk4cl\") pod \"nova-operator-controller-manager-79556f57fc-cxjcf\" (UID: \"4e773e83-c06c-47e9-8a34-ef72472e3ae8\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.261606 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.268610 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgzqq\" (UniqueName: \"kubernetes.io/projected/f0f31412-34be-4b9d-8df1-b53d23abb1f6-kube-api-access-qgzqq\") pod 
\"manila-operator-controller-manager-58bb8d67cc-s4q64\" (UID: \"f0f31412-34be-4b9d-8df1-b53d23abb1f6\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.288498 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.289356 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.302321 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.303566 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.323313 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.323884 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hc25b" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.324390 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.324958 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.344934 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-6b5bm" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.369539 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33faed21-8b19-4064-a6e2-5064ce8cbab2-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-njfjf\" (UID: \"33faed21-8b19-4064-a6e2-5064ce8cbab2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.369574 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-595q6\" (UniqueName: \"kubernetes.io/projected/9291524e-d650-4366-b795-162d53bf2815-kube-api-access-595q6\") pod \"neutron-operator-controller-manager-7c57c8bbc4-l7cv5\" (UID: \"9291524e-d650-4366-b795-162d53bf2815\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.369639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfcdm\" (UniqueName: \"kubernetes.io/projected/23471a89-c4fb-4e45-b7bb-2664e4ea99f3-kube-api-access-bfcdm\") pod \"octavia-operator-controller-manager-fd75fd47d-j5fsj\" (UID: \"23471a89-c4fb-4e45-b7bb-2664e4ea99f3\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.369670 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk4cl\" (UniqueName: \"kubernetes.io/projected/4e773e83-c06c-47e9-8a34-ef72472e3ae8-kube-api-access-xk4cl\") pod \"nova-operator-controller-manager-79556f57fc-cxjcf\" 
(UID: \"4e773e83-c06c-47e9-8a34-ef72472e3ae8\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.369705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbqr6\" (UniqueName: \"kubernetes.io/projected/65361481-df4d-4010-a478-91fd2c50d9e6-kube-api-access-rbqr6\") pod \"ovn-operator-controller-manager-66cf5c67ff-wvv98\" (UID: \"65361481-df4d-4010-a478-91fd2c50d9e6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.378419 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33faed21-8b19-4064-a6e2-5064ce8cbab2-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-njfjf\" (UID: \"33faed21-8b19-4064-a6e2-5064ce8cbab2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.385515 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.386567 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.389369 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rftlz\" (UniqueName: \"kubernetes.io/projected/002e6b13-60c5-484c-8116-b4d5241ed678-kube-api-access-rftlz\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-54bpm\" (UID: \"002e6b13-60c5-484c-8116-b4d5241ed678\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.392180 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.393169 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.396999 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.415657 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.416373 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cxhr9" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.416521 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-p8p99" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.418808 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.420901 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.422519 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.429412 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-595q6\" (UniqueName: \"kubernetes.io/projected/9291524e-d650-4366-b795-162d53bf2815-kube-api-access-595q6\") pod \"neutron-operator-controller-manager-7c57c8bbc4-l7cv5\" (UID: \"9291524e-d650-4366-b795-162d53bf2815\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.429852 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7bdqq" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.436329 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk4cl\" (UniqueName: \"kubernetes.io/projected/4e773e83-c06c-47e9-8a34-ef72472e3ae8-kube-api-access-xk4cl\") pod \"nova-operator-controller-manager-79556f57fc-cxjcf\" (UID: \"4e773e83-c06c-47e9-8a34-ef72472e3ae8\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.462316 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.473941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbqr6\" (UniqueName: \"kubernetes.io/projected/65361481-df4d-4010-a478-91fd2c50d9e6-kube-api-access-rbqr6\") pod \"ovn-operator-controller-manager-66cf5c67ff-wvv98\" (UID: \"65361481-df4d-4010-a478-91fd2c50d9e6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.474119 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfcdm\" (UniqueName: \"kubernetes.io/projected/23471a89-c4fb-4e45-b7bb-2664e4ea99f3-kube-api-access-bfcdm\") pod \"octavia-operator-controller-manager-fd75fd47d-j5fsj\" (UID: \"23471a89-c4fb-4e45-b7bb-2664e4ea99f3\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.480016 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.481594 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.484317 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-x9fjd" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.501167 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.513309 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.516073 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfcdm\" (UniqueName: \"kubernetes.io/projected/23471a89-c4fb-4e45-b7bb-2664e4ea99f3-kube-api-access-bfcdm\") pod \"octavia-operator-controller-manager-fd75fd47d-j5fsj\" (UID: \"23471a89-c4fb-4e45-b7bb-2664e4ea99f3\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.517256 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.519771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbqr6\" (UniqueName: \"kubernetes.io/projected/65361481-df4d-4010-a478-91fd2c50d9e6-kube-api-access-rbqr6\") pod \"ovn-operator-controller-manager-66cf5c67ff-wvv98\" (UID: \"65361481-df4d-4010-a478-91fd2c50d9e6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.533397 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.533878 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.541681 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.558154 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-cr5ch"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.560510 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.563948 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9dpz6" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.564395 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gzjwq" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.588781 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97stv\" (UniqueName: \"kubernetes.io/projected/cef58941-ae6b-4624-af41-65ab598838eb-kube-api-access-97stv\") pod \"telemetry-operator-controller-manager-567f98c9d-plxrr\" (UID: \"cef58941-ae6b-4624-af41-65ab598838eb\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.589016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzggd\" 
(UniqueName: \"kubernetes.io/projected/025ea53a-75d6-443a-965c-83ee12e37737-kube-api-access-xzggd\") pod \"test-operator-controller-manager-5cb74df96-zhdg8\" (UID: \"025ea53a-75d6-443a-965c-83ee12e37737\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.589123 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spkxp\" (UniqueName: \"kubernetes.io/projected/59482a15-4638-4508-b60c-1c60c8df6d09-kube-api-access-spkxp\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-c8gdx\" (UID: \"59482a15-4638-4508-b60c-1c60c8df6d09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.589234 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2sx\" (UniqueName: \"kubernetes.io/projected/6d9d0ad6-0976-4f14-81fb-f286f6768256-kube-api-access-9p2sx\") pod \"placement-operator-controller-manager-5db546f9d9-w4gcn\" (UID: \"6d9d0ad6-0976-4f14-81fb-f286f6768256\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.589363 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgbz\" (UniqueName: \"kubernetes.io/projected/0f496ee1-ca51-427f-a51d-4fc214c7f50a-kube-api-access-2jgbz\") pod \"watcher-operator-controller-manager-864885998-cr5ch\" (UID: \"0f496ee1-ca51-427f-a51d-4fc214c7f50a\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.589500 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr28c\" (UniqueName: 
\"kubernetes.io/projected/8aea8bb6-720b-412a-acfc-f62366da5de5-kube-api-access-mr28c\") pod \"swift-operator-controller-manager-6fdc4fcf86-pmw6n\" (UID: \"8aea8bb6-720b-412a-acfc-f62366da5de5\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.589610 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-c8gdx\" (UID: \"59482a15-4638-4508-b60c-1c60c8df6d09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.602425 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.620389 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-cr5ch"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.649961 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.706422 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgbz\" (UniqueName: \"kubernetes.io/projected/0f496ee1-ca51-427f-a51d-4fc214c7f50a-kube-api-access-2jgbz\") pod \"watcher-operator-controller-manager-864885998-cr5ch\" (UID: \"0f496ee1-ca51-427f-a51d-4fc214c7f50a\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.706496 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr28c\" (UniqueName: \"kubernetes.io/projected/8aea8bb6-720b-412a-acfc-f62366da5de5-kube-api-access-mr28c\") pod \"swift-operator-controller-manager-6fdc4fcf86-pmw6n\" (UID: \"8aea8bb6-720b-412a-acfc-f62366da5de5\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.706524 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-c8gdx\" (UID: \"59482a15-4638-4508-b60c-1c60c8df6d09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.706575 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97stv\" (UniqueName: \"kubernetes.io/projected/cef58941-ae6b-4624-af41-65ab598838eb-kube-api-access-97stv\") pod \"telemetry-operator-controller-manager-567f98c9d-plxrr\" (UID: \"cef58941-ae6b-4624-af41-65ab598838eb\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.706601 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzggd\" (UniqueName: \"kubernetes.io/projected/025ea53a-75d6-443a-965c-83ee12e37737-kube-api-access-xzggd\") pod \"test-operator-controller-manager-5cb74df96-zhdg8\" (UID: \"025ea53a-75d6-443a-965c-83ee12e37737\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.706624 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spkxp\" (UniqueName: \"kubernetes.io/projected/59482a15-4638-4508-b60c-1c60c8df6d09-kube-api-access-spkxp\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-c8gdx\" (UID: \"59482a15-4638-4508-b60c-1c60c8df6d09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.706644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p2sx\" (UniqueName: \"kubernetes.io/projected/6d9d0ad6-0976-4f14-81fb-f286f6768256-kube-api-access-9p2sx\") pod \"placement-operator-controller-manager-5db546f9d9-w4gcn\" (UID: \"6d9d0ad6-0976-4f14-81fb-f286f6768256\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" Nov 25 08:25:14 crc kubenswrapper[4760]: E1125 08:25:14.707343 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 08:25:14 crc kubenswrapper[4760]: E1125 08:25:14.707391 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert podName:59482a15-4638-4508-b60c-1c60c8df6d09 nodeName:}" failed. No retries permitted until 2025-11-25 08:25:15.207374015 +0000 UTC m=+848.916404810 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" (UID: "59482a15-4638-4508-b60c-1c60c8df6d09") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.738676 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.746633 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr28c\" (UniqueName: \"kubernetes.io/projected/8aea8bb6-720b-412a-acfc-f62366da5de5-kube-api-access-mr28c\") pod \"swift-operator-controller-manager-6fdc4fcf86-pmw6n\" (UID: \"8aea8bb6-720b-412a-acfc-f62366da5de5\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.747233 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgbz\" (UniqueName: \"kubernetes.io/projected/0f496ee1-ca51-427f-a51d-4fc214c7f50a-kube-api-access-2jgbz\") pod \"watcher-operator-controller-manager-864885998-cr5ch\" (UID: \"0f496ee1-ca51-427f-a51d-4fc214c7f50a\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.757881 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p2sx\" (UniqueName: \"kubernetes.io/projected/6d9d0ad6-0976-4f14-81fb-f286f6768256-kube-api-access-9p2sx\") pod \"placement-operator-controller-manager-5db546f9d9-w4gcn\" (UID: \"6d9d0ad6-0976-4f14-81fb-f286f6768256\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.761285 4760 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.773435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spkxp\" (UniqueName: \"kubernetes.io/projected/59482a15-4638-4508-b60c-1c60c8df6d09-kube-api-access-spkxp\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-c8gdx\" (UID: \"59482a15-4638-4508-b60c-1c60c8df6d09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.790766 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzggd\" (UniqueName: \"kubernetes.io/projected/025ea53a-75d6-443a-965c-83ee12e37737-kube-api-access-xzggd\") pod \"test-operator-controller-manager-5cb74df96-zhdg8\" (UID: \"025ea53a-75d6-443a-965c-83ee12e37737\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.824151 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.838237 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97stv\" (UniqueName: \"kubernetes.io/projected/cef58941-ae6b-4624-af41-65ab598838eb-kube-api-access-97stv\") pod \"telemetry-operator-controller-manager-567f98c9d-plxrr\" (UID: \"cef58941-ae6b-4624-af41-65ab598838eb\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.866393 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.882421 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.883629 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.889421 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.889629 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jkb85" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.891557 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.895645 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.918131 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4"] Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.918919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twd5p\" (UniqueName: \"kubernetes.io/projected/c43ab37e-375d-4000-8313-9ea135250641-kube-api-access-twd5p\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.918959 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.918995 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.932675 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" Nov 25 08:25:14 crc kubenswrapper[4760]: I1125 08:25:14.976640 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.019882 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.019933 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:15 crc kubenswrapper[4760]: E1125 08:25:15.020055 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.020089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twd5p\" (UniqueName: \"kubernetes.io/projected/c43ab37e-375d-4000-8313-9ea135250641-kube-api-access-twd5p\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:15 crc kubenswrapper[4760]: E1125 08:25:15.020121 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-webhook-certs podName:c43ab37e-375d-4000-8313-9ea135250641 nodeName:}" failed. No retries permitted until 2025-11-25 08:25:15.520102001 +0000 UTC m=+849.229132796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-wmmn4" (UID: "c43ab37e-375d-4000-8313-9ea135250641") : secret "webhook-server-cert" not found Nov 25 08:25:15 crc kubenswrapper[4760]: E1125 08:25:15.020187 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 08:25:15 crc kubenswrapper[4760]: E1125 08:25:15.020267 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs podName:c43ab37e-375d-4000-8313-9ea135250641 nodeName:}" failed. No retries permitted until 2025-11-25 08:25:15.520230905 +0000 UTC m=+849.229261700 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs") pod "openstack-operator-controller-manager-7cd5954d9-wmmn4" (UID: "c43ab37e-375d-4000-8313-9ea135250641") : secret "metrics-server-cert" not found Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.028095 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc"] Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.029079 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc"] Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.029104 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2"] Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.029191 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.032656 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-fdpj8" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.052598 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twd5p\" (UniqueName: \"kubernetes.io/projected/c43ab37e-375d-4000-8313-9ea135250641-kube-api-access-twd5p\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.201062 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s8kw\" (UniqueName: 
\"kubernetes.io/projected/a9a9b42e-4d3b-495e-804e-af02af05581d-kube-api-access-6s8kw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5crqc\" (UID: \"a9a9b42e-4d3b-495e-804e-af02af05581d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.307025 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s8kw\" (UniqueName: \"kubernetes.io/projected/a9a9b42e-4d3b-495e-804e-af02af05581d-kube-api-access-6s8kw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5crqc\" (UID: \"a9a9b42e-4d3b-495e-804e-af02af05581d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.307382 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-c8gdx\" (UID: \"59482a15-4638-4508-b60c-1c60c8df6d09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:15 crc kubenswrapper[4760]: E1125 08:25:15.307546 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 08:25:15 crc kubenswrapper[4760]: E1125 08:25:15.307590 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert podName:59482a15-4638-4508-b60c-1c60c8df6d09 nodeName:}" failed. No retries permitted until 2025-11-25 08:25:16.307575971 +0000 UTC m=+850.016606756 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" (UID: "59482a15-4638-4508-b60c-1c60c8df6d09") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.342765 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s8kw\" (UniqueName: \"kubernetes.io/projected/a9a9b42e-4d3b-495e-804e-af02af05581d-kube-api-access-6s8kw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5crqc\" (UID: \"a9a9b42e-4d3b-495e-804e-af02af05581d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.488047 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz"] Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.492837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" event={"ID":"03a9ee81-2733-444d-8edc-ddb1303b5686","Type":"ContainerStarted","Data":"7d05d17f9a454e276fb04621e4aa00c41d321b34eadb69d2ce277863a2a3d4b6"} Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.539798 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.544516 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv"] Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.596342 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf"] Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.618656 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.618705 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:15 crc kubenswrapper[4760]: E1125 08:25:15.618866 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 08:25:15 crc kubenswrapper[4760]: E1125 08:25:15.618953 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs podName:c43ab37e-375d-4000-8313-9ea135250641 nodeName:}" failed. No retries permitted until 2025-11-25 08:25:16.618933548 +0000 UTC m=+850.327964333 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs") pod "openstack-operator-controller-manager-7cd5954d9-wmmn4" (UID: "c43ab37e-375d-4000-8313-9ea135250641") : secret "metrics-server-cert" not found Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.623046 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.821887 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-l24ns"] Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.831150 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v"] Nov 25 08:25:15 crc kubenswrapper[4760]: I1125 08:25:15.842276 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64"] Nov 25 08:25:15 crc kubenswrapper[4760]: W1125 08:25:15.847757 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d556614_e3c1_4834_919a_0c6f5f5cc4de.slice/crio-30978807bd0bce6ef1f247b14e10767f1d9ce4ec576ac546083a596fd067f34c WatchSource:0}: Error finding container 30978807bd0bce6ef1f247b14e10767f1d9ce4ec576ac546083a596fd067f34c: Status 404 returned error can't find the container with id 30978807bd0bce6ef1f247b14e10767f1d9ce4ec576ac546083a596fd067f34c Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.034491 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf"] Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.038453 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr"] Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.076521 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44"] Nov 25 08:25:16 crc kubenswrapper[4760]: W1125 08:25:16.087941 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dde35ac_ff01_4e46_9eae_234e6abc37dc.slice/crio-e73f21c96f3bf5b61164bb2f23609edcf786bfa23a7becabbe6baf54c4d289fd WatchSource:0}: Error finding container e73f21c96f3bf5b61164bb2f23609edcf786bfa23a7becabbe6baf54c4d289fd: Status 404 returned error can't find the container with id e73f21c96f3bf5b61164bb2f23609edcf786bfa23a7becabbe6baf54c4d289fd Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.239236 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5"] Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.269094 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-cr5ch"] Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.282630 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf"] Nov 25 08:25:16 crc kubenswrapper[4760]: W1125 08:25:16.282852 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod002e6b13_60c5_484c_8116_b4d5241ed678.slice/crio-0b8bd81fdf6527df5184c352250d5f80c03e0f43dabfa7d9f65c0ca7057a00f1 WatchSource:0}: Error finding container 
0b8bd81fdf6527df5184c352250d5f80c03e0f43dabfa7d9f65c0ca7057a00f1: Status 404 returned error can't find the container with id 0b8bd81fdf6527df5184c352250d5f80c03e0f43dabfa7d9f65c0ca7057a00f1 Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.285676 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xk4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-cxjcf_openstack-operators(4e773e83-c06c-47e9-8a34-ef72472e3ae8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.288634 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xk4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-cxjcf_openstack-operators(4e773e83-c06c-47e9-8a34-ef72472e3ae8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.289009 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj"] Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.289932 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" podUID="4e773e83-c06c-47e9-8a34-ef72472e3ae8" Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.293344 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rftlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-54bpm_openstack-operators(002e6b13-60c5-484c-8116-b4d5241ed678): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.294580 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98"] Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.295075 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rftlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-54bpm_openstack-operators(002e6b13-60c5-484c-8116-b4d5241ed678): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.296852 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" podUID="002e6b13-60c5-484c-8116-b4d5241ed678" Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.299581 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm"] Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.337078 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-c8gdx\" (UID: 
\"59482a15-4638-4508-b60c-1c60c8df6d09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.337307 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.337367 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert podName:59482a15-4638-4508-b60c-1c60c8df6d09 nodeName:}" failed. No retries permitted until 2025-11-25 08:25:18.337347308 +0000 UTC m=+852.046378103 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" (UID: "59482a15-4638-4508-b60c-1c60c8df6d09") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.434968 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8"] Nov 25 08:25:16 crc kubenswrapper[4760]: W1125 08:25:16.443099 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod025ea53a_75d6_443a_965c_83ee12e37737.slice/crio-81016787006dbf03c90e8deab574132617c9f8561dc1663c0df8fec5063f831f WatchSource:0}: Error finding container 81016787006dbf03c90e8deab574132617c9f8561dc1663c0df8fec5063f831f: Status 404 returned error can't find the container with id 81016787006dbf03c90e8deab574132617c9f8561dc1663c0df8fec5063f831f Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.510898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" 
event={"ID":"9291524e-d650-4366-b795-162d53bf2815","Type":"ContainerStarted","Data":"e10298fb498499ab7a504127277fb4ff798d6493cc04c91489de117e9910f5c7"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.512669 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" event={"ID":"f531ae0e-78ad-4d2c-951f-0d1f7d1c8129","Type":"ContainerStarted","Data":"ffe857fe449ae84d69bb920d3dfd9cf35c489cfd85ef6b244388764062af48e0"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.516744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" event={"ID":"65361481-df4d-4010-a478-91fd2c50d9e6","Type":"ContainerStarted","Data":"8816b62ca3bc3c1077dff53599188cdcec9cb930f7e633ce1149fb35b516ca90"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.518332 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" event={"ID":"b4325bd6-c276-4fbc-bc67-cf5a026c3537","Type":"ContainerStarted","Data":"7cfadd117e120049cdc5d631773082ca674e2da05ee9675022f07eb6108c5dce"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.519827 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" event={"ID":"002e6b13-60c5-484c-8116-b4d5241ed678","Type":"ContainerStarted","Data":"0b8bd81fdf6527df5184c352250d5f80c03e0f43dabfa7d9f65c0ca7057a00f1"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.522091 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" event={"ID":"25f372bf-e250-492b-abb9-680b1efdbdec","Type":"ContainerStarted","Data":"cc66dee0131037bd3a9d5403c7e2d1ffcdafb1fabb76e83f2c565ad3212b8f53"} Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.522236 4760 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" podUID="002e6b13-60c5-484c-8116-b4d5241ed678" Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.525475 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" event={"ID":"f0f31412-34be-4b9d-8df1-b53d23abb1f6","Type":"ContainerStarted","Data":"cceb05449a2d1b28ca4c97318a90233f1d8d81484c8ba733034bc8039a317d83"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.531918 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" event={"ID":"4e773e83-c06c-47e9-8a34-ef72472e3ae8","Type":"ContainerStarted","Data":"5d22b8b53440b77163a5c3727588ddacac6ba13a6e6b7bd3bbe52641b30a0522"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.551734 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" event={"ID":"025ea53a-75d6-443a-965c-83ee12e37737","Type":"ContainerStarted","Data":"81016787006dbf03c90e8deab574132617c9f8561dc1663c0df8fec5063f831f"} Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.552231 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" podUID="4e773e83-c06c-47e9-8a34-ef72472e3ae8" Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.556261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" event={"ID":"33faed21-8b19-4064-a6e2-5064ce8cbab2","Type":"ContainerStarted","Data":"3cf7bd8b9e662066f864967beb5a4d4ed7db3de220aad03411701970417f525f"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.561181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" event={"ID":"23471a89-c4fb-4e45-b7bb-2664e4ea99f3","Type":"ContainerStarted","Data":"31a6bea9f856adaec89b893c3e943d0883391a6bde25456eeb6ca8bbfe2d73ed"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.566516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" event={"ID":"890067e5-2be8-4699-8d90-f2771ef453e5","Type":"ContainerStarted","Data":"c59da0e679a76da1d64ae0387b3aa23ee2f3836060426c30764820a77d6a4f2d"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.568405 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" event={"ID":"6dde35ac-ff01-4e46-9eae-234e6abc37dc","Type":"ContainerStarted","Data":"e73f21c96f3bf5b61164bb2f23609edcf786bfa23a7becabbe6baf54c4d289fd"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.572026 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" event={"ID":"97e97ce2-b50b-478e-acb2-cbdd5232d67c","Type":"ContainerStarted","Data":"f661e6acf29d09703b7b515f75cb1286c9de3c9dfa9127d6baadc8f4b6ca02a1"} Nov 25 08:25:16 crc kubenswrapper[4760]: 
I1125 08:25:16.573491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" event={"ID":"1d556614-e3c1-4834-919a-0c6f5f5cc4de","Type":"ContainerStarted","Data":"30978807bd0bce6ef1f247b14e10767f1d9ce4ec576ac546083a596fd067f34c"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.574701 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" event={"ID":"0f496ee1-ca51-427f-a51d-4fc214c7f50a","Type":"ContainerStarted","Data":"ed47459d3e6c3089db5417da96e190fc50295e57fc3ae80c36b65df121f43cde"} Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.655198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.655397 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.655451 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs podName:c43ab37e-375d-4000-8313-9ea135250641 nodeName:}" failed. No retries permitted until 2025-11-25 08:25:18.655434179 +0000 UTC m=+852.364464974 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs") pod "openstack-operator-controller-manager-7cd5954d9-wmmn4" (UID: "c43ab37e-375d-4000-8313-9ea135250641") : secret "metrics-server-cert" not found Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.656860 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc"] Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.662289 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n"] Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.667864 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr"] Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.699577 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mr28c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-pmw6n_openstack-operators(8aea8bb6-720b-412a-acfc-f62366da5de5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 08:25:16 crc kubenswrapper[4760]: I1125 08:25:16.702163 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn"] Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.703377 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mr28c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-pmw6n_openstack-operators(8aea8bb6-720b-412a-acfc-f62366da5de5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.705373 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" 
podUID="8aea8bb6-720b-412a-acfc-f62366da5de5" Nov 25 08:25:16 crc kubenswrapper[4760]: W1125 08:25:16.708672 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcef58941_ae6b_4624_af41_65ab598838eb.slice/crio-e084f73c6040739d32dceb80a7488c53f9da6c488aa3ada872027fd4b9237f50 WatchSource:0}: Error finding container e084f73c6040739d32dceb80a7488c53f9da6c488aa3ada872027fd4b9237f50: Status 404 returned error can't find the container with id e084f73c6040739d32dceb80a7488c53f9da6c488aa3ada872027fd4b9237f50 Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.712000 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97stv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-plxrr_openstack-operators(cef58941-ae6b-4624-af41-65ab598838eb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.716992 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97stv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-plxrr_openstack-operators(cef58941-ae6b-4624-af41-65ab598838eb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 08:25:16 crc kubenswrapper[4760]: E1125 08:25:16.718409 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb" Nov 25 08:25:16 crc kubenswrapper[4760]: W1125 08:25:16.834917 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d9d0ad6_0976_4f14_81fb_f286f6768256.slice/crio-f594fcdccf3891ff5f0ad646abebeb590a63ab84b60854c3b349bb55c6c7d2d3 WatchSource:0}: Error finding container f594fcdccf3891ff5f0ad646abebeb590a63ab84b60854c3b349bb55c6c7d2d3: Status 404 returned error can't find the container with id 
f594fcdccf3891ff5f0ad646abebeb590a63ab84b60854c3b349bb55c6c7d2d3 Nov 25 08:25:17 crc kubenswrapper[4760]: I1125 08:25:17.583128 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" event={"ID":"cef58941-ae6b-4624-af41-65ab598838eb","Type":"ContainerStarted","Data":"e084f73c6040739d32dceb80a7488c53f9da6c488aa3ada872027fd4b9237f50"} Nov 25 08:25:17 crc kubenswrapper[4760]: I1125 08:25:17.585454 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" event={"ID":"8aea8bb6-720b-412a-acfc-f62366da5de5","Type":"ContainerStarted","Data":"0a35d38c375da3c3f8efdef3f56828924e418ffcd966292158a2909737d7a880"} Nov 25 08:25:17 crc kubenswrapper[4760]: E1125 08:25:17.592961 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb" Nov 25 08:25:17 crc kubenswrapper[4760]: E1125 08:25:17.593069 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" podUID="8aea8bb6-720b-412a-acfc-f62366da5de5" Nov 25 08:25:17 crc kubenswrapper[4760]: I1125 08:25:17.607738 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" event={"ID":"6d9d0ad6-0976-4f14-81fb-f286f6768256","Type":"ContainerStarted","Data":"f594fcdccf3891ff5f0ad646abebeb590a63ab84b60854c3b349bb55c6c7d2d3"} Nov 25 08:25:17 crc kubenswrapper[4760]: I1125 08:25:17.609663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" event={"ID":"a9a9b42e-4d3b-495e-804e-af02af05581d","Type":"ContainerStarted","Data":"daef0016a97cb334476277cc68439e59f95f65c0e77b3619673992c51a89bdbb"} Nov 25 08:25:17 crc kubenswrapper[4760]: E1125 08:25:17.616754 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" podUID="002e6b13-60c5-484c-8116-b4d5241ed678" Nov 25 08:25:17 crc kubenswrapper[4760]: E1125 08:25:17.618169 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" podUID="4e773e83-c06c-47e9-8a34-ef72472e3ae8" Nov 25 08:25:18 crc kubenswrapper[4760]: I1125 08:25:18.405734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-c8gdx\" (UID: \"59482a15-4638-4508-b60c-1c60c8df6d09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:18 crc kubenswrapper[4760]: I1125 08:25:18.431768 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59482a15-4638-4508-b60c-1c60c8df6d09-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-c8gdx\" (UID: \"59482a15-4638-4508-b60c-1c60c8df6d09\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:18 crc kubenswrapper[4760]: E1125 08:25:18.634591 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb" Nov 25 08:25:18 crc kubenswrapper[4760]: E1125 08:25:18.634615 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" podUID="8aea8bb6-720b-412a-acfc-f62366da5de5" Nov 25 08:25:18 crc kubenswrapper[4760]: I1125 08:25:18.701452 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:18 crc kubenswrapper[4760]: I1125 08:25:18.717011 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:18 crc kubenswrapper[4760]: I1125 08:25:18.722356 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c43ab37e-375d-4000-8313-9ea135250641-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-wmmn4\" (UID: \"c43ab37e-375d-4000-8313-9ea135250641\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:18 crc kubenswrapper[4760]: I1125 08:25:18.854708 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:30 crc kubenswrapper[4760]: E1125 08:25:30.429749 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c" Nov 25 08:25:30 crc kubenswrapper[4760]: E1125 08:25:30.430543 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9p2sx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-w4gcn_openstack-operators(6d9d0ad6-0976-4f14-81fb-f286f6768256): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:25:31 crc kubenswrapper[4760]: E1125 08:25:31.051324 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9" Nov 25 08:25:31 crc kubenswrapper[4760]: E1125 08:25:31.051561 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4m5x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c9694994-l28cr_openstack-operators(890067e5-2be8-4699-8d90-f2771ef453e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:25:34 crc kubenswrapper[4760]: I1125 08:25:34.463917 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx"] Nov 25 08:25:34 crc kubenswrapper[4760]: E1125 08:25:34.656543 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 25 08:25:34 crc kubenswrapper[4760]: E1125 08:25:34.656744 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jgbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-cr5ch_openstack-operators(0f496ee1-ca51-427f-a51d-4fc214c7f50a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:25:35 crc kubenswrapper[4760]: E1125 08:25:35.092611 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 25 08:25:35 crc kubenswrapper[4760]: E1125 08:25:35.092833 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6s8kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5crqc_openstack-operators(a9a9b42e-4d3b-495e-804e-af02af05581d): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:25:35 crc kubenswrapper[4760]: E1125 08:25:35.094011 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" podUID="a9a9b42e-4d3b-495e-804e-af02af05581d" Nov 25 08:25:35 crc kubenswrapper[4760]: W1125 08:25:35.529676 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59482a15_4638_4508_b60c_1c60c8df6d09.slice/crio-46effdcc2b4e0cd9dc4893bd39586cb68d2413cc67a1bcef4508223e2bb4d7af WatchSource:0}: Error finding container 46effdcc2b4e0cd9dc4893bd39586cb68d2413cc67a1bcef4508223e2bb4d7af: Status 404 returned error can't find the container with id 46effdcc2b4e0cd9dc4893bd39586cb68d2413cc67a1bcef4508223e2bb4d7af Nov 25 08:25:35 crc kubenswrapper[4760]: I1125 08:25:35.770098 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" event={"ID":"59482a15-4638-4508-b60c-1c60c8df6d09","Type":"ContainerStarted","Data":"46effdcc2b4e0cd9dc4893bd39586cb68d2413cc67a1bcef4508223e2bb4d7af"} Nov 25 08:25:35 crc kubenswrapper[4760]: E1125 08:25:35.773463 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" podUID="a9a9b42e-4d3b-495e-804e-af02af05581d" Nov 25 08:25:35 crc kubenswrapper[4760]: I1125 08:25:35.945337 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4"] Nov 25 08:25:36 crc kubenswrapper[4760]: W1125 08:25:36.141785 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc43ab37e_375d_4000_8313_9ea135250641.slice/crio-c46bf7434f4f72d29520eb27eb25fbe037521beae2a465774f52a665a933cc12 WatchSource:0}: Error finding container c46bf7434f4f72d29520eb27eb25fbe037521beae2a465774f52a665a933cc12: Status 404 returned error can't find the container with id c46bf7434f4f72d29520eb27eb25fbe037521beae2a465774f52a665a933cc12 Nov 25 08:25:36 crc kubenswrapper[4760]: I1125 08:25:36.779642 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" event={"ID":"c43ab37e-375d-4000-8313-9ea135250641","Type":"ContainerStarted","Data":"c46bf7434f4f72d29520eb27eb25fbe037521beae2a465774f52a665a933cc12"} Nov 25 08:25:37 crc kubenswrapper[4760]: I1125 08:25:37.786662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" event={"ID":"1d556614-e3c1-4834-919a-0c6f5f5cc4de","Type":"ContainerStarted","Data":"0a118edce1f40fbbdd6a99feb6b0792560535a8e4c818798859296dcbbce765f"} Nov 25 08:25:43 crc kubenswrapper[4760]: I1125 08:25:43.846398 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" event={"ID":"c43ab37e-375d-4000-8313-9ea135250641","Type":"ContainerStarted","Data":"62375c6e6c46b8016b2db27f4ad6c08e80140b22fe4b9645f5ad386d7d26929f"} Nov 25 08:25:43 crc kubenswrapper[4760]: I1125 08:25:43.847372 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" 
event={"ID":"025ea53a-75d6-443a-965c-83ee12e37737","Type":"ContainerStarted","Data":"f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5"} Nov 25 08:25:44 crc kubenswrapper[4760]: I1125 08:25:44.853096 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:44 crc kubenswrapper[4760]: I1125 08:25:44.886953 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" podStartSLOduration=30.886928565 podStartE2EDuration="30.886928565s" podCreationTimestamp="2025-11-25 08:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:25:44.882065255 +0000 UTC m=+878.591096050" watchObservedRunningTime="2025-11-25 08:25:44.886928565 +0000 UTC m=+878.595959360" Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.869314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" event={"ID":"f0f31412-34be-4b9d-8df1-b53d23abb1f6","Type":"ContainerStarted","Data":"a42195511db5c10b0b1cb254dbbfec7cd13dc0f4b1a554e46fa8f18c39064ba7"} Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.871127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" event={"ID":"33faed21-8b19-4064-a6e2-5064ce8cbab2","Type":"ContainerStarted","Data":"f374509f532646a61b20dd3beddfed971429fa3250d97e3645c9b0a746a8e178"} Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.872432 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" 
event={"ID":"9291524e-d650-4366-b795-162d53bf2815","Type":"ContainerStarted","Data":"3773418666d6c1aa765b32572dc5f7d2064dce044f3934d02959369d7bc6b072"} Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.874132 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" event={"ID":"f531ae0e-78ad-4d2c-951f-0d1f7d1c8129","Type":"ContainerStarted","Data":"53225b6ce3c8a83c4ad7786e8ecd947b524c027578b930d4ea2430a141b6896b"} Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.882605 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" event={"ID":"25f372bf-e250-492b-abb9-680b1efdbdec","Type":"ContainerStarted","Data":"78d35fa844f9306bf9e9c781f238abe91ee4e07a9af371de8b90edc168d0f3fc"} Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.889950 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" event={"ID":"6dde35ac-ff01-4e46-9eae-234e6abc37dc","Type":"ContainerStarted","Data":"365799acb56992f20ec49ef9a96eb81e58cf921aa96746555ab528df55407607"} Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.891266 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" event={"ID":"65361481-df4d-4010-a478-91fd2c50d9e6","Type":"ContainerStarted","Data":"963bf83dfc51cf642f3f5f4f3376f99812aecbff49365eaf6e05541ce2015fe4"} Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.892546 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" event={"ID":"97e97ce2-b50b-478e-acb2-cbdd5232d67c","Type":"ContainerStarted","Data":"476af3dd083d0a100d050519fda6d03ee35e63ecb50e5fd2c9e8258c54fc91bc"} Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.896084 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" event={"ID":"b4325bd6-c276-4fbc-bc67-cf5a026c3537","Type":"ContainerStarted","Data":"3bf03df4953d259610af803731deb9aaf22d28bcc3b549ed11c7093e123d5b4a"} Nov 25 08:25:45 crc kubenswrapper[4760]: I1125 08:25:45.898155 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" event={"ID":"03a9ee81-2733-444d-8edc-ddb1303b5686","Type":"ContainerStarted","Data":"6cc06ddc45048296f24515a3ee7d592b625eb990c80087a56169b579e5f0d1c1"} Nov 25 08:25:47 crc kubenswrapper[4760]: I1125 08:25:47.919617 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" event={"ID":"23471a89-c4fb-4e45-b7bb-2664e4ea99f3","Type":"ContainerStarted","Data":"c80d0f86ae9c63a6bfaf2e60dba603165038ea221ed371e05df3887f97c065df"} Nov 25 08:25:47 crc kubenswrapper[4760]: I1125 08:25:47.921335 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" event={"ID":"cef58941-ae6b-4624-af41-65ab598838eb","Type":"ContainerStarted","Data":"f78b709dbad6c6e20e05142a94c68fe4609db950922312a6d6a99c81f12b12ef"} Nov 25 08:25:47 crc kubenswrapper[4760]: I1125 08:25:47.923383 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" event={"ID":"8aea8bb6-720b-412a-acfc-f62366da5de5","Type":"ContainerStarted","Data":"bba1c0376c5c153ef9c035da71b8692fdf23af163211330cafffdcc7b4fdc3c5"} Nov 25 08:25:47 crc kubenswrapper[4760]: I1125 08:25:47.924790 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" event={"ID":"002e6b13-60c5-484c-8116-b4d5241ed678","Type":"ContainerStarted","Data":"ecb09c60390c5382a076a4d52832e1347803837617cc7a39429f6e75e369f0a6"} Nov 25 08:25:48 
crc kubenswrapper[4760]: I1125 08:25:48.869206 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 08:25:48 crc kubenswrapper[4760]: I1125 08:25:48.963183 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" event={"ID":"4e773e83-c06c-47e9-8a34-ef72472e3ae8","Type":"ContainerStarted","Data":"fdea6e7cda5309041600d82d5850e20001daf4f49aa39c5b2bf0aa27a453ca9a"} Nov 25 08:25:48 crc kubenswrapper[4760]: I1125 08:25:48.976191 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" event={"ID":"59482a15-4638-4508-b60c-1c60c8df6d09","Type":"ContainerStarted","Data":"c5cbe35e0f38c2d8743b4705cf0e0dd18fb4499fb5307499421fb426460d6a49"} Nov 25 08:25:49 crc kubenswrapper[4760]: E1125 08:25:49.138432 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" podUID="0f496ee1-ca51-427f-a51d-4fc214c7f50a" Nov 25 08:25:49 crc kubenswrapper[4760]: E1125 08:25:49.204802 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" podUID="6d9d0ad6-0976-4f14-81fb-f286f6768256" Nov 25 08:25:49 crc kubenswrapper[4760]: E1125 08:25:49.563986 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" podUID="890067e5-2be8-4699-8d90-f2771ef453e5" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.014734 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" event={"ID":"1d556614-e3c1-4834-919a-0c6f5f5cc4de","Type":"ContainerStarted","Data":"f2edda55ff1a8e249db662dd703b45c10596b96bb79684d34ff857c20b8ff3c9"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.015128 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.018207 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.021472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" event={"ID":"890067e5-2be8-4699-8d90-f2771ef453e5","Type":"ContainerStarted","Data":"716c0feec950e7f613dd92fbafdae4606f0aa63635654e1b6eb0bfc54ca51be2"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.023061 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" event={"ID":"8aea8bb6-720b-412a-acfc-f62366da5de5","Type":"ContainerStarted","Data":"b9793e88793f45c387e95dc9c2b56a7e3700df2cfb58d13942010028c556c2a3"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.023235 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.027103 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" event={"ID":"002e6b13-60c5-484c-8116-b4d5241ed678","Type":"ContainerStarted","Data":"f31faad1c6ec644b00ac0a1271f07892bf550a5b9e501906e03e020a1c66c831"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.027637 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.040486 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" podStartSLOduration=4.270209675 podStartE2EDuration="37.040468087s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:15.854867903 +0000 UTC m=+849.563898698" lastFinishedPulling="2025-11-25 08:25:48.625126315 +0000 UTC m=+882.334157110" observedRunningTime="2025-11-25 08:25:50.034388972 +0000 UTC m=+883.743419767" watchObservedRunningTime="2025-11-25 08:25:50.040468087 +0000 UTC m=+883.749498882" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.043439 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" event={"ID":"03a9ee81-2733-444d-8edc-ddb1303b5686","Type":"ContainerStarted","Data":"45301cfd774f0a0a05d82977bd490f8cd3e5b22ac864cd06e24a87a30685bf70"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.044283 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.048538 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.054179 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" event={"ID":"6d9d0ad6-0976-4f14-81fb-f286f6768256","Type":"ContainerStarted","Data":"5f931de6d61d74add81a6c3aec221fb8abab8913010bef8d4661144edff5088a"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.056887 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" event={"ID":"f0f31412-34be-4b9d-8df1-b53d23abb1f6","Type":"ContainerStarted","Data":"8cd036a04f99cbf83b8ab642b02beb65b4812b9f0f07793db00c8086b4c3a175"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.057092 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.059725 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.070508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" event={"ID":"b4325bd6-c276-4fbc-bc67-cf5a026c3537","Type":"ContainerStarted","Data":"c5feed17efcbe350dbd53ebabbf0dbb268674ab99049896ac75d885086d09d9d"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.071321 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.075960 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.082126 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" 
event={"ID":"59482a15-4638-4508-b60c-1c60c8df6d09","Type":"ContainerStarted","Data":"7fd9fc47c648dbd754877a2fee21a4d2c7c6781bc3a603ffac174a34b491d4d2"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.082983 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.086407 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" podStartSLOduration=4.558322523 podStartE2EDuration="37.08638038s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.293159626 +0000 UTC m=+850.002190421" lastFinishedPulling="2025-11-25 08:25:48.821217483 +0000 UTC m=+882.530248278" observedRunningTime="2025-11-25 08:25:50.083086505 +0000 UTC m=+883.792117300" watchObservedRunningTime="2025-11-25 08:25:50.08638038 +0000 UTC m=+883.795411175" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.122816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" event={"ID":"0f496ee1-ca51-427f-a51d-4fc214c7f50a","Type":"ContainerStarted","Data":"768095f307eec7cc1adc2e4b139a9a0ff1a6ce8cfafb826c1544c351391b2b13"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.126990 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" podStartSLOduration=5.040886441 podStartE2EDuration="37.126967719s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.699412326 +0000 UTC m=+850.408443121" lastFinishedPulling="2025-11-25 08:25:48.785493604 +0000 UTC m=+882.494524399" observedRunningTime="2025-11-25 08:25:50.122824529 +0000 UTC m=+883.831855324" watchObservedRunningTime="2025-11-25 
08:25:50.126967719 +0000 UTC m=+883.835998514" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.159459 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.160681 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.164062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" event={"ID":"6dde35ac-ff01-4e46-9eae-234e6abc37dc","Type":"ContainerStarted","Data":"b5cb2339215a8c2d85800c796f604cb4e98b1c76469a0f16945af9df90601312"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.165079 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.171652 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.189282 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" event={"ID":"97e97ce2-b50b-478e-acb2-cbdd5232d67c","Type":"ContainerStarted","Data":"632de19a702db1c039d1144f4a7f93846b18d4dd7206d36ade007c0b27c49e69"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.191561 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.198091 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.230984 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" event={"ID":"23471a89-c4fb-4e45-b7bb-2664e4ea99f3","Type":"ContainerStarted","Data":"660cbc9fbd42e60c49db0890b8e3237b4ae9fc02be26774926e1c6a9d2d4d9e3"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.232000 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.242082 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" event={"ID":"a9a9b42e-4d3b-495e-804e-af02af05581d","Type":"ContainerStarted","Data":"1ed89f44c4cb3d5308462671a5bfdb712260c4f51b4f04768897c8a3c4d206f6"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.242422 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" podStartSLOduration=3.665856149 podStartE2EDuration="37.242400193s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:15.210049572 +0000 UTC m=+848.919080367" lastFinishedPulling="2025-11-25 08:25:48.786593616 +0000 UTC m=+882.495624411" observedRunningTime="2025-11-25 08:25:50.1888186 +0000 UTC m=+883.897849395" watchObservedRunningTime="2025-11-25 08:25:50.242400193 +0000 UTC m=+883.951430978" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.247029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" event={"ID":"cef58941-ae6b-4624-af41-65ab598838eb","Type":"ContainerStarted","Data":"927d63be5465b3bb2d5e22c8d713e17d67e558f1f24a90fbf7b6af6c333f592d"} Nov 25 
08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.247795 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.284051 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" event={"ID":"9291524e-d650-4366-b795-162d53bf2815","Type":"ContainerStarted","Data":"ec3ab11e07a22113422d753beeb790bdeb2896db09964c6bf09102d615e68cf7"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.285269 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.291303 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.296698 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" event={"ID":"25f372bf-e250-492b-abb9-680b1efdbdec","Type":"ContainerStarted","Data":"7f7c2b1bd34413be53125d132180e0df59b7be8efff7133b6d4684e0b3cc2b07"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.297541 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.311437 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" event={"ID":"65361481-df4d-4010-a478-91fd2c50d9e6","Type":"ContainerStarted","Data":"94b88f9ebb1d009cd681429a062426b5d267ce06b8d0a9d051ca521f893aa43a"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.311909 4760 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" podStartSLOduration=4.423843731 podStartE2EDuration="37.311897585s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:15.850757455 +0000 UTC m=+849.559788250" lastFinishedPulling="2025-11-25 08:25:48.738811309 +0000 UTC m=+882.447842104" observedRunningTime="2025-11-25 08:25:50.272692555 +0000 UTC m=+883.981723350" watchObservedRunningTime="2025-11-25 08:25:50.311897585 +0000 UTC m=+884.020928380" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.312520 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.312748 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.333553 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.335429 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" event={"ID":"4e773e83-c06c-47e9-8a34-ef72472e3ae8","Type":"ContainerStarted","Data":"32eb268250260c348f166a4bb491ef601ddb6c4401bb6a5021ed0fcee239ab8f"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.336200 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.344222 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" podStartSLOduration=4.565041707 
podStartE2EDuration="37.344204485s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:15.850755565 +0000 UTC m=+849.559786370" lastFinishedPulling="2025-11-25 08:25:48.629918353 +0000 UTC m=+882.338949148" observedRunningTime="2025-11-25 08:25:50.343531246 +0000 UTC m=+884.052562041" watchObservedRunningTime="2025-11-25 08:25:50.344204485 +0000 UTC m=+884.053235280" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.347552 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" podStartSLOduration=25.29957117 podStartE2EDuration="37.347527621s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:35.533544748 +0000 UTC m=+869.242575553" lastFinishedPulling="2025-11-25 08:25:47.581501209 +0000 UTC m=+881.290532004" observedRunningTime="2025-11-25 08:25:50.313175351 +0000 UTC m=+884.022206166" watchObservedRunningTime="2025-11-25 08:25:50.347527621 +0000 UTC m=+884.056558416" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.353000 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" event={"ID":"025ea53a-75d6-443a-965c-83ee12e37737","Type":"ContainerStarted","Data":"4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.354097 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.370636 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.381459 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" event={"ID":"33faed21-8b19-4064-a6e2-5064ce8cbab2","Type":"ContainerStarted","Data":"99b7181a9d551bb0dcef301f83522b3bba0ec27a0f8bbade34504d1f8e2c89c9"} Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.382498 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.424268 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.477064 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" podStartSLOduration=6.213379519 podStartE2EDuration="37.477044471s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.285480415 +0000 UTC m=+849.994511210" lastFinishedPulling="2025-11-25 08:25:47.549145357 +0000 UTC m=+881.258176162" observedRunningTime="2025-11-25 08:25:50.439664054 +0000 UTC m=+884.148694859" watchObservedRunningTime="2025-11-25 08:25:50.477044471 +0000 UTC m=+884.186075276" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.512303 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" podStartSLOduration=3.982773837 podStartE2EDuration="37.512283866s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:15.655870822 +0000 UTC m=+849.364901617" lastFinishedPulling="2025-11-25 08:25:49.185380861 +0000 UTC m=+882.894411646" observedRunningTime="2025-11-25 08:25:50.478667058 +0000 UTC m=+884.187697873" watchObservedRunningTime="2025-11-25 08:25:50.512283866 +0000 UTC m=+884.221314661" Nov 25 08:25:50 crc 
kubenswrapper[4760]: I1125 08:25:50.549285 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" podStartSLOduration=4.472755188 podStartE2EDuration="36.54924026s" podCreationTimestamp="2025-11-25 08:25:14 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.694419842 +0000 UTC m=+850.403450637" lastFinishedPulling="2025-11-25 08:25:48.770904914 +0000 UTC m=+882.479935709" observedRunningTime="2025-11-25 08:25:50.513613014 +0000 UTC m=+884.222643809" watchObservedRunningTime="2025-11-25 08:25:50.54924026 +0000 UTC m=+884.258271055" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.564454 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" podStartSLOduration=5.196580335 podStartE2EDuration="37.564431988s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.278980107 +0000 UTC m=+849.988010912" lastFinishedPulling="2025-11-25 08:25:48.64683177 +0000 UTC m=+882.355862565" observedRunningTime="2025-11-25 08:25:50.541365833 +0000 UTC m=+884.250396658" watchObservedRunningTime="2025-11-25 08:25:50.564431988 +0000 UTC m=+884.273462783" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.574987 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" podStartSLOduration=4.325864768 podStartE2EDuration="37.574968611s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.090320674 +0000 UTC m=+849.799351469" lastFinishedPulling="2025-11-25 08:25:49.339424517 +0000 UTC m=+883.048455312" observedRunningTime="2025-11-25 08:25:50.572712956 +0000 UTC m=+884.281743751" watchObservedRunningTime="2025-11-25 08:25:50.574968611 +0000 UTC m=+884.283999406" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 
08:25:50.596320 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" podStartSLOduration=4.448893822 podStartE2EDuration="37.596301646s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:15.524448827 +0000 UTC m=+849.233479622" lastFinishedPulling="2025-11-25 08:25:48.671856651 +0000 UTC m=+882.380887446" observedRunningTime="2025-11-25 08:25:50.595199074 +0000 UTC m=+884.304229869" watchObservedRunningTime="2025-11-25 08:25:50.596301646 +0000 UTC m=+884.305332451" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.655065 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" podStartSLOduration=4.029602445 podStartE2EDuration="37.655049007s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:15.662481382 +0000 UTC m=+849.371512177" lastFinishedPulling="2025-11-25 08:25:49.287927944 +0000 UTC m=+882.996958739" observedRunningTime="2025-11-25 08:25:50.653968026 +0000 UTC m=+884.362998821" watchObservedRunningTime="2025-11-25 08:25:50.655049007 +0000 UTC m=+884.364079802" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.659578 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podStartSLOduration=4.583852177 podStartE2EDuration="36.659559337s" podCreationTimestamp="2025-11-25 08:25:14 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.711859304 +0000 UTC m=+850.420890099" lastFinishedPulling="2025-11-25 08:25:48.787566464 +0000 UTC m=+882.496597259" observedRunningTime="2025-11-25 08:25:50.63188537 +0000 UTC m=+884.340916185" watchObservedRunningTime="2025-11-25 08:25:50.659559337 +0000 UTC m=+884.368590132" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.678317 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" podStartSLOduration=4.773674625 podStartE2EDuration="37.678300787s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.279379999 +0000 UTC m=+849.988410794" lastFinishedPulling="2025-11-25 08:25:49.184006161 +0000 UTC m=+882.893036956" observedRunningTime="2025-11-25 08:25:50.677984068 +0000 UTC m=+884.387014863" watchObservedRunningTime="2025-11-25 08:25:50.678300787 +0000 UTC m=+884.387331582" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.766992 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" podStartSLOduration=3.9786636189999998 podStartE2EDuration="36.766964591s" podCreationTimestamp="2025-11-25 08:25:14 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.444785763 +0000 UTC m=+850.153816548" lastFinishedPulling="2025-11-25 08:25:49.233086725 +0000 UTC m=+882.942117520" observedRunningTime="2025-11-25 08:25:50.726376152 +0000 UTC m=+884.435406967" watchObservedRunningTime="2025-11-25 08:25:50.766964591 +0000 UTC m=+884.475995386" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.768177 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" podStartSLOduration=5.032802918 podStartE2EDuration="37.768170445s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.052625399 +0000 UTC m=+849.761656184" lastFinishedPulling="2025-11-25 08:25:48.787992916 +0000 UTC m=+882.497023711" observedRunningTime="2025-11-25 08:25:50.752725361 +0000 UTC m=+884.461756146" watchObservedRunningTime="2025-11-25 08:25:50.768170445 +0000 UTC m=+884.477201240" Nov 25 08:25:50 crc kubenswrapper[4760]: I1125 08:25:50.796683 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" podStartSLOduration=4.889190791 podStartE2EDuration="37.796656305s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.27733233 +0000 UTC m=+849.986363125" lastFinishedPulling="2025-11-25 08:25:49.184797844 +0000 UTC m=+882.893828639" observedRunningTime="2025-11-25 08:25:50.787127901 +0000 UTC m=+884.496158696" watchObservedRunningTime="2025-11-25 08:25:50.796656305 +0000 UTC m=+884.505687100" Nov 25 08:25:51 crc kubenswrapper[4760]: I1125 08:25:51.390779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" event={"ID":"6d9d0ad6-0976-4f14-81fb-f286f6768256","Type":"ContainerStarted","Data":"6e207003e9ae45ccb1d185f3779b5f1df6eddda369ea72f52d6ad2038552cbcf"} Nov 25 08:25:51 crc kubenswrapper[4760]: I1125 08:25:51.391460 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" Nov 25 08:25:51 crc kubenswrapper[4760]: I1125 08:25:51.393673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" event={"ID":"0f496ee1-ca51-427f-a51d-4fc214c7f50a","Type":"ContainerStarted","Data":"4c25e20700be96ef60479e8ae592b3174f0611826b3ba3aefc1c35ce0702f23b"} Nov 25 08:25:51 crc kubenswrapper[4760]: I1125 08:25:51.393781 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" Nov 25 08:25:51 crc kubenswrapper[4760]: I1125 08:25:51.395720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" 
event={"ID":"f531ae0e-78ad-4d2c-951f-0d1f7d1c8129","Type":"ContainerStarted","Data":"7f65724c1e659b8bf1f9f8c2a0f1be8d9b7fb31714e576521311db7aad62ae0c"} Nov 25 08:25:51 crc kubenswrapper[4760]: I1125 08:25:51.397501 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" event={"ID":"890067e5-2be8-4699-8d90-f2771ef453e5","Type":"ContainerStarted","Data":"373db6e0c2b67d0d63ddfbebfb084a2bfedd2006f38f42a6725dd6dfcadf172d"} Nov 25 08:25:51 crc kubenswrapper[4760]: I1125 08:25:51.421977 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" podStartSLOduration=4.752387301 podStartE2EDuration="38.421959004s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.862143743 +0000 UTC m=+850.571174538" lastFinishedPulling="2025-11-25 08:25:50.531715436 +0000 UTC m=+884.240746241" observedRunningTime="2025-11-25 08:25:51.416325371 +0000 UTC m=+885.125356166" watchObservedRunningTime="2025-11-25 08:25:51.421959004 +0000 UTC m=+885.130989789" Nov 25 08:25:51 crc kubenswrapper[4760]: I1125 08:25:51.523860 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" podStartSLOduration=3.124981312 podStartE2EDuration="37.523838008s" podCreationTimestamp="2025-11-25 08:25:14 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.278101772 +0000 UTC m=+849.987132567" lastFinishedPulling="2025-11-25 08:25:50.676958468 +0000 UTC m=+884.385989263" observedRunningTime="2025-11-25 08:25:51.521222202 +0000 UTC m=+885.230252997" watchObservedRunningTime="2025-11-25 08:25:51.523838008 +0000 UTC m=+885.232868823" Nov 25 08:25:51 crc kubenswrapper[4760]: I1125 08:25:51.526560 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" podStartSLOduration=4.044307978 podStartE2EDuration="38.526549476s" podCreationTimestamp="2025-11-25 08:25:13 +0000 UTC" firstStartedPulling="2025-11-25 08:25:16.044070632 +0000 UTC m=+849.753101427" lastFinishedPulling="2025-11-25 08:25:50.52631213 +0000 UTC m=+884.235342925" observedRunningTime="2025-11-25 08:25:51.485131283 +0000 UTC m=+885.194162088" watchObservedRunningTime="2025-11-25 08:25:51.526549476 +0000 UTC m=+885.235580281" Nov 25 08:25:52 crc kubenswrapper[4760]: I1125 08:25:52.406811 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 08:25:52 crc kubenswrapper[4760]: I1125 08:25:52.409102 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 08:25:52 crc kubenswrapper[4760]: I1125 08:25:52.409459 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 08:25:54 crc kubenswrapper[4760]: I1125 08:25:54.537049 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 08:25:54 crc kubenswrapper[4760]: I1125 08:25:54.656233 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 08:25:54 crc kubenswrapper[4760]: I1125 08:25:54.832087 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 08:25:58 crc kubenswrapper[4760]: I1125 08:25:58.709793 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 08:26:03 crc kubenswrapper[4760]: I1125 08:26:03.996274 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 08:26:04 crc kubenswrapper[4760]: I1125 08:26:04.875605 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" Nov 25 08:26:04 crc kubenswrapper[4760]: I1125 08:26:04.984951 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.140789 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-6vmmx"] Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.142890 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.145339 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.145517 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.145614 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-w7hrm" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.145646 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.162325 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-6vmmx"] Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.224792 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc2d668-b156-4466-8797-a6d09912d8e6-config\") pod \"dnsmasq-dns-7bdd77c89-6vmmx\" (UID: \"4fc2d668-b156-4466-8797-a6d09912d8e6\") " pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.225043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6kg\" (UniqueName: \"kubernetes.io/projected/4fc2d668-b156-4466-8797-a6d09912d8e6-kube-api-access-9n6kg\") pod \"dnsmasq-dns-7bdd77c89-6vmmx\" (UID: \"4fc2d668-b156-4466-8797-a6d09912d8e6\") " pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.250819 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6584b49599-m4m4b"] Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.252384 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.254461 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.263764 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-m4m4b"] Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.326273 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6kg\" (UniqueName: \"kubernetes.io/projected/4fc2d668-b156-4466-8797-a6d09912d8e6-kube-api-access-9n6kg\") pod \"dnsmasq-dns-7bdd77c89-6vmmx\" (UID: \"4fc2d668-b156-4466-8797-a6d09912d8e6\") " pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.326582 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-dns-svc\") pod \"dnsmasq-dns-6584b49599-m4m4b\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.326637 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc2d668-b156-4466-8797-a6d09912d8e6-config\") pod \"dnsmasq-dns-7bdd77c89-6vmmx\" (UID: \"4fc2d668-b156-4466-8797-a6d09912d8e6\") " pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.326659 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-config\") pod \"dnsmasq-dns-6584b49599-m4m4b\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 
08:26:23.326679 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp79z\" (UniqueName: \"kubernetes.io/projected/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-kube-api-access-lp79z\") pod \"dnsmasq-dns-6584b49599-m4m4b\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.327470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc2d668-b156-4466-8797-a6d09912d8e6-config\") pod \"dnsmasq-dns-7bdd77c89-6vmmx\" (UID: \"4fc2d668-b156-4466-8797-a6d09912d8e6\") " pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.345490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6kg\" (UniqueName: \"kubernetes.io/projected/4fc2d668-b156-4466-8797-a6d09912d8e6-kube-api-access-9n6kg\") pod \"dnsmasq-dns-7bdd77c89-6vmmx\" (UID: \"4fc2d668-b156-4466-8797-a6d09912d8e6\") " pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.428368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-dns-svc\") pod \"dnsmasq-dns-6584b49599-m4m4b\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.428442 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-config\") pod \"dnsmasq-dns-6584b49599-m4m4b\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.428465 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lp79z\" (UniqueName: \"kubernetes.io/projected/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-kube-api-access-lp79z\") pod \"dnsmasq-dns-6584b49599-m4m4b\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.429401 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-dns-svc\") pod \"dnsmasq-dns-6584b49599-m4m4b\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.429461 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-config\") pod \"dnsmasq-dns-6584b49599-m4m4b\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.449941 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp79z\" (UniqueName: \"kubernetes.io/projected/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-kube-api-access-lp79z\") pod \"dnsmasq-dns-6584b49599-m4m4b\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.477466 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.568763 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.890217 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-6vmmx"] Nov 25 08:26:23 crc kubenswrapper[4760]: I1125 08:26:23.895629 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:26:24 crc kubenswrapper[4760]: I1125 08:26:24.015830 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-m4m4b"] Nov 25 08:26:24 crc kubenswrapper[4760]: W1125 08:26:24.019753 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5d3b64a_bb02_4715_bb8d_fbe6ea2e0a8d.slice/crio-18b043c25f7e1d62cede27e0d0905d463e71d63e0a537f910567a76ae7087404 WatchSource:0}: Error finding container 18b043c25f7e1d62cede27e0d0905d463e71d63e0a537f910567a76ae7087404: Status 404 returned error can't find the container with id 18b043c25f7e1d62cede27e0d0905d463e71d63e0a537f910567a76ae7087404 Nov 25 08:26:24 crc kubenswrapper[4760]: I1125 08:26:24.630536 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-m4m4b" event={"ID":"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d","Type":"ContainerStarted","Data":"18b043c25f7e1d62cede27e0d0905d463e71d63e0a537f910567a76ae7087404"} Nov 25 08:26:24 crc kubenswrapper[4760]: I1125 08:26:24.634381 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" event={"ID":"4fc2d668-b156-4466-8797-a6d09912d8e6","Type":"ContainerStarted","Data":"50c4cde4f80b954a343b045369aba2c756cf06662ba3153332d79cc2acf53723"} Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.489792 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-m4m4b"] Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.527508 4760 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-m6qmw"] Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.537010 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.545635 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-m6qmw"] Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.587360 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9rhr\" (UniqueName: \"kubernetes.io/projected/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-kube-api-access-m9rhr\") pod \"dnsmasq-dns-7c6d9948dc-m6qmw\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.587724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-config\") pod \"dnsmasq-dns-7c6d9948dc-m6qmw\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.587789 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-m6qmw\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.691223 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9rhr\" (UniqueName: \"kubernetes.io/projected/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-kube-api-access-m9rhr\") pod \"dnsmasq-dns-7c6d9948dc-m6qmw\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " 
pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.691338 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-config\") pod \"dnsmasq-dns-7c6d9948dc-m6qmw\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.691381 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-m6qmw\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.692706 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-config\") pod \"dnsmasq-dns-7c6d9948dc-m6qmw\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.698232 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-m6qmw\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.718706 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9rhr\" (UniqueName: \"kubernetes.io/projected/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-kube-api-access-m9rhr\") pod \"dnsmasq-dns-7c6d9948dc-m6qmw\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.798147 4760 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-6vmmx"] Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.841964 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-997jz"] Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.843608 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.856067 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-997jz"] Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.863993 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.895212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmlx\" (UniqueName: \"kubernetes.io/projected/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-kube-api-access-bnmlx\") pod \"dnsmasq-dns-6486446b9f-997jz\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.895651 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-dns-svc\") pod \"dnsmasq-dns-6486446b9f-997jz\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.895749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-config\") pod \"dnsmasq-dns-6486446b9f-997jz\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 
25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.997145 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmlx\" (UniqueName: \"kubernetes.io/projected/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-kube-api-access-bnmlx\") pod \"dnsmasq-dns-6486446b9f-997jz\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.997213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-dns-svc\") pod \"dnsmasq-dns-6486446b9f-997jz\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.997298 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-config\") pod \"dnsmasq-dns-6486446b9f-997jz\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.998112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-config\") pod \"dnsmasq-dns-6486446b9f-997jz\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:26 crc kubenswrapper[4760]: I1125 08:26:26.998156 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-dns-svc\") pod \"dnsmasq-dns-6486446b9f-997jz\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.031997 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bnmlx\" (UniqueName: \"kubernetes.io/projected/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-kube-api-access-bnmlx\") pod \"dnsmasq-dns-6486446b9f-997jz\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.170036 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.660547 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.662084 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.664617 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.668025 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.668324 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.668440 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.668549 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.668654 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mgpb7" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.676880 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 
08:26:27.692169 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809295 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-config-data\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809321 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809344 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-kube-api-access-q996c\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " 
pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809581 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809626 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809652 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.809682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc 
kubenswrapper[4760]: I1125 08:26:27.809721 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912779 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912801 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912837 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912892 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912922 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-config-data\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.912987 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-kube-api-access-q996c\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.914763 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-config-data\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.915408 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.915692 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.916239 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.916656 4760 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.916704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.919835 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.920902 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.921956 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.924303 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.933014 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-kube-api-access-q996c\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.946353 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " pod="openstack/rabbitmq-server-0" Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.997146 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 08:26:27 crc kubenswrapper[4760]: I1125 08:26:27.999170 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.009526 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.009743 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.009888 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.010065 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mhr6s" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.010170 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.010224 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.010390 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.010521 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.049497 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.115529 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1de21d0-f4de-4294-a1b0-ec1328f46531-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.115595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.115617 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.115641 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.115770 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.115877 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.115943 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.115999 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1de21d0-f4de-4294-a1b0-ec1328f46531-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.116164 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.116278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.116333 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6dl6\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-kube-api-access-k6dl6\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217385 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1de21d0-f4de-4294-a1b0-ec1328f46531-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217446 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217468 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc 
kubenswrapper[4760]: I1125 08:26:28.217517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217559 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217610 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1de21d0-f4de-4294-a1b0-ec1328f46531-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217649 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217769 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.217798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6dl6\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-kube-api-access-k6dl6\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.220415 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.222222 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.222863 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.223482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.223624 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.224825 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1de21d0-f4de-4294-a1b0-ec1328f46531-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.228666 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.230890 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1de21d0-f4de-4294-a1b0-ec1328f46531-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.231409 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.232942 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.237183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6dl6\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-kube-api-access-k6dl6\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.249576 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:28 crc kubenswrapper[4760]: I1125 08:26:28.330170 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.155695 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.157421 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.160396 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.160952 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9v7wz" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.161201 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.161225 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.177466 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.191119 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.232308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de9d3301-bdad-46bf-b7c2-4467cfd590dd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.232394 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.232486 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9d3301-bdad-46bf-b7c2-4467cfd590dd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.232514 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de9d3301-bdad-46bf-b7c2-4467cfd590dd-config-data-default\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.232548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ldg9\" (UniqueName: \"kubernetes.io/projected/de9d3301-bdad-46bf-b7c2-4467cfd590dd-kube-api-access-6ldg9\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.232578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9d3301-bdad-46bf-b7c2-4467cfd590dd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.232608 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9d3301-bdad-46bf-b7c2-4467cfd590dd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.232632 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/de9d3301-bdad-46bf-b7c2-4467cfd590dd-kolla-config\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.333937 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9d3301-bdad-46bf-b7c2-4467cfd590dd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.333989 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de9d3301-bdad-46bf-b7c2-4467cfd590dd-config-data-default\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.334024 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ldg9\" (UniqueName: \"kubernetes.io/projected/de9d3301-bdad-46bf-b7c2-4467cfd590dd-kube-api-access-6ldg9\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.334053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9d3301-bdad-46bf-b7c2-4467cfd590dd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.334086 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9d3301-bdad-46bf-b7c2-4467cfd590dd-galera-tls-certs\") pod \"openstack-galera-0\" 
(UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.334111 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de9d3301-bdad-46bf-b7c2-4467cfd590dd-kolla-config\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.334140 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de9d3301-bdad-46bf-b7c2-4467cfd590dd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.334179 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.334727 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.338348 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de9d3301-bdad-46bf-b7c2-4467cfd590dd-config-data-default\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.338534 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9d3301-bdad-46bf-b7c2-4467cfd590dd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.339047 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de9d3301-bdad-46bf-b7c2-4467cfd590dd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.339132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de9d3301-bdad-46bf-b7c2-4467cfd590dd-kolla-config\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.340701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9d3301-bdad-46bf-b7c2-4467cfd590dd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.341769 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9d3301-bdad-46bf-b7c2-4467cfd590dd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.359460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ldg9\" (UniqueName: 
\"kubernetes.io/projected/de9d3301-bdad-46bf-b7c2-4467cfd590dd-kube-api-access-6ldg9\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.370008 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"de9d3301-bdad-46bf-b7c2-4467cfd590dd\") " pod="openstack/openstack-galera-0" Nov 25 08:26:29 crc kubenswrapper[4760]: I1125 08:26:29.479116 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.431081 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.432438 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.434668 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tcbdm" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.434692 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.434927 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.434962 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.446429 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.550638 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/17455e1c-2662-421d-ac93-ce773e1fd50a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.550707 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17455e1c-2662-421d-ac93-ce773e1fd50a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.550777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17455e1c-2662-421d-ac93-ce773e1fd50a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.550877 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkpkr\" (UniqueName: \"kubernetes.io/projected/17455e1c-2662-421d-ac93-ce773e1fd50a-kube-api-access-tkpkr\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.550923 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/17455e1c-2662-421d-ac93-ce773e1fd50a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc 
kubenswrapper[4760]: I1125 08:26:30.550951 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17455e1c-2662-421d-ac93-ce773e1fd50a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.550988 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.551011 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/17455e1c-2662-421d-ac93-ce773e1fd50a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.653561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkpkr\" (UniqueName: \"kubernetes.io/projected/17455e1c-2662-421d-ac93-ce773e1fd50a-kube-api-access-tkpkr\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.653715 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/17455e1c-2662-421d-ac93-ce773e1fd50a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 
08:26:30.653737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17455e1c-2662-421d-ac93-ce773e1fd50a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.653766 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.653788 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/17455e1c-2662-421d-ac93-ce773e1fd50a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.653816 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/17455e1c-2662-421d-ac93-ce773e1fd50a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.653843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17455e1c-2662-421d-ac93-ce773e1fd50a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.653892 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17455e1c-2662-421d-ac93-ce773e1fd50a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.654112 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.654724 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17455e1c-2662-421d-ac93-ce773e1fd50a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.656091 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/17455e1c-2662-421d-ac93-ce773e1fd50a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.656949 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/17455e1c-2662-421d-ac93-ce773e1fd50a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.657901 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/17455e1c-2662-421d-ac93-ce773e1fd50a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.667680 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/17455e1c-2662-421d-ac93-ce773e1fd50a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.673628 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17455e1c-2662-421d-ac93-ce773e1fd50a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.679084 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkpkr\" (UniqueName: \"kubernetes.io/projected/17455e1c-2662-421d-ac93-ce773e1fd50a-kube-api-access-tkpkr\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.696023 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"17455e1c-2662-421d-ac93-ce773e1fd50a\") " pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.762070 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.837796 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.839511 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.841362 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.843790 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.844179 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bf2wt" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.849602 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.965611 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1b32df7-1040-4d21-89cd-d5f772bd4014-config-data\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.965691 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1b32df7-1040-4d21-89cd-d5f772bd4014-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.965924 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms8fc\" (UniqueName: 
\"kubernetes.io/projected/f1b32df7-1040-4d21-89cd-d5f772bd4014-kube-api-access-ms8fc\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.966056 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b32df7-1040-4d21-89cd-d5f772bd4014-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:30 crc kubenswrapper[4760]: I1125 08:26:30.966167 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1b32df7-1040-4d21-89cd-d5f772bd4014-kolla-config\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.067139 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1b32df7-1040-4d21-89cd-d5f772bd4014-config-data\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.067186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1b32df7-1040-4d21-89cd-d5f772bd4014-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.067573 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms8fc\" (UniqueName: \"kubernetes.io/projected/f1b32df7-1040-4d21-89cd-d5f772bd4014-kube-api-access-ms8fc\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " 
pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.067619 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b32df7-1040-4d21-89cd-d5f772bd4014-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.067652 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1b32df7-1040-4d21-89cd-d5f772bd4014-kolla-config\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.068098 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1b32df7-1040-4d21-89cd-d5f772bd4014-config-data\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.068312 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1b32df7-1040-4d21-89cd-d5f772bd4014-kolla-config\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.072006 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1b32df7-1040-4d21-89cd-d5f772bd4014-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.088365 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1b32df7-1040-4d21-89cd-d5f772bd4014-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.094755 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms8fc\" (UniqueName: \"kubernetes.io/projected/f1b32df7-1040-4d21-89cd-d5f772bd4014-kube-api-access-ms8fc\") pod \"memcached-0\" (UID: \"f1b32df7-1040-4d21-89cd-d5f772bd4014\") " pod="openstack/memcached-0" Nov 25 08:26:31 crc kubenswrapper[4760]: I1125 08:26:31.168152 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 08:26:32 crc kubenswrapper[4760]: I1125 08:26:32.708652 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 08:26:32 crc kubenswrapper[4760]: I1125 08:26:32.710188 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 08:26:32 crc kubenswrapper[4760]: I1125 08:26:32.713443 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jdhsk" Nov 25 08:26:32 crc kubenswrapper[4760]: I1125 08:26:32.722753 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 08:26:32 crc kubenswrapper[4760]: I1125 08:26:32.793529 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwc4\" (UniqueName: \"kubernetes.io/projected/50f445d9-b3be-421d-b30a-89759c1ad2e8-kube-api-access-hxwc4\") pod \"kube-state-metrics-0\" (UID: \"50f445d9-b3be-421d-b30a-89759c1ad2e8\") " pod="openstack/kube-state-metrics-0" Nov 25 08:26:32 crc kubenswrapper[4760]: I1125 08:26:32.894938 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwc4\" (UniqueName: 
\"kubernetes.io/projected/50f445d9-b3be-421d-b30a-89759c1ad2e8-kube-api-access-hxwc4\") pod \"kube-state-metrics-0\" (UID: \"50f445d9-b3be-421d-b30a-89759c1ad2e8\") " pod="openstack/kube-state-metrics-0" Nov 25 08:26:32 crc kubenswrapper[4760]: I1125 08:26:32.915014 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwc4\" (UniqueName: \"kubernetes.io/projected/50f445d9-b3be-421d-b30a-89759c1ad2e8-kube-api-access-hxwc4\") pod \"kube-state-metrics-0\" (UID: \"50f445d9-b3be-421d-b30a-89759c1ad2e8\") " pod="openstack/kube-state-metrics-0" Nov 25 08:26:33 crc kubenswrapper[4760]: I1125 08:26:33.030547 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.291024 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wtp5g"] Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.293726 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.296762 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.297117 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zvdcb" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.305695 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.308928 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wtp5g"] Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.329573 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kf25c"] Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.331924 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.337011 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kf25c"] Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.384187 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b050dee-2005-4a2b-8550-6f5d055a86b6-var-run-ovn\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.384271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b050dee-2005-4a2b-8550-6f5d055a86b6-combined-ca-bundle\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.384303 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b050dee-2005-4a2b-8550-6f5d055a86b6-var-log-ovn\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.384326 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbd2b\" (UniqueName: \"kubernetes.io/projected/7b050dee-2005-4a2b-8550-6f5d055a86b6-kube-api-access-nbd2b\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.384382 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7b050dee-2005-4a2b-8550-6f5d055a86b6-ovn-controller-tls-certs\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.384415 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b050dee-2005-4a2b-8550-6f5d055a86b6-scripts\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.384450 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b050dee-2005-4a2b-8550-6f5d055a86b6-var-run\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.485988 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-var-run\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486045 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8vnk\" (UniqueName: \"kubernetes.io/projected/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-kube-api-access-c8vnk\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486066 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-scripts\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b050dee-2005-4a2b-8550-6f5d055a86b6-var-run-ovn\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486310 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b050dee-2005-4a2b-8550-6f5d055a86b6-combined-ca-bundle\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-var-log\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486384 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b050dee-2005-4a2b-8550-6f5d055a86b6-var-log-ovn\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486407 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbd2b\" (UniqueName: \"kubernetes.io/projected/7b050dee-2005-4a2b-8550-6f5d055a86b6-kube-api-access-nbd2b\") pod \"ovn-controller-wtp5g\" 
(UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486506 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b050dee-2005-4a2b-8550-6f5d055a86b6-ovn-controller-tls-certs\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486550 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b050dee-2005-4a2b-8550-6f5d055a86b6-scripts\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486608 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-etc-ovs\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b050dee-2005-4a2b-8550-6f5d055a86b6-var-run\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b050dee-2005-4a2b-8550-6f5d055a86b6-var-run-ovn\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 
08:26:36.486724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-var-lib\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.486781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b050dee-2005-4a2b-8550-6f5d055a86b6-var-run\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.487436 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b050dee-2005-4a2b-8550-6f5d055a86b6-var-log-ovn\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.489304 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b050dee-2005-4a2b-8550-6f5d055a86b6-scripts\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.494375 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b050dee-2005-4a2b-8550-6f5d055a86b6-ovn-controller-tls-certs\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.507309 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbd2b\" (UniqueName: 
\"kubernetes.io/projected/7b050dee-2005-4a2b-8550-6f5d055a86b6-kube-api-access-nbd2b\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.508927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b050dee-2005-4a2b-8550-6f5d055a86b6-combined-ca-bundle\") pod \"ovn-controller-wtp5g\" (UID: \"7b050dee-2005-4a2b-8550-6f5d055a86b6\") " pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588111 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-var-lib\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588188 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-var-run\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588222 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8vnk\" (UniqueName: \"kubernetes.io/projected/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-kube-api-access-c8vnk\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-scripts\") pod \"ovn-controller-ovs-kf25c\" (UID: 
\"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588304 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-var-log\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588371 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-etc-ovs\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588453 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-var-run\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588552 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-var-lib\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-etc-ovs\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.588789 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-var-log\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.590682 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-scripts\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.607041 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8vnk\" (UniqueName: \"kubernetes.io/projected/d1ba8a40-f479-46dc-b509-a9c4d9c4670b-kube-api-access-c8vnk\") pod \"ovn-controller-ovs-kf25c\" (UID: \"d1ba8a40-f479-46dc-b509-a9c4d9c4670b\") " pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.628555 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:36 crc kubenswrapper[4760]: I1125 08:26:36.661159 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.133605 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.142876 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.145318 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.145345 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.145593 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zj4q5" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.145726 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.145753 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.149895 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.299728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gw7\" (UniqueName: \"kubernetes.io/projected/281d5fd5-dd87-4463-be57-4fd409cf4009-kube-api-access-j8gw7\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.299783 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/281d5fd5-dd87-4463-be57-4fd409cf4009-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.300134 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/281d5fd5-dd87-4463-be57-4fd409cf4009-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.300212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/281d5fd5-dd87-4463-be57-4fd409cf4009-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.300482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.300524 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281d5fd5-dd87-4463-be57-4fd409cf4009-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.300551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/281d5fd5-dd87-4463-be57-4fd409cf4009-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.300646 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/281d5fd5-dd87-4463-be57-4fd409cf4009-config\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.402366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/281d5fd5-dd87-4463-be57-4fd409cf4009-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.402606 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.404045 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/281d5fd5-dd87-4463-be57-4fd409cf4009-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.404383 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281d5fd5-dd87-4463-be57-4fd409cf4009-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.404943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/281d5fd5-dd87-4463-be57-4fd409cf4009-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc 
kubenswrapper[4760]: I1125 08:26:37.405521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281d5fd5-dd87-4463-be57-4fd409cf4009-config\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.405581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gw7\" (UniqueName: \"kubernetes.io/projected/281d5fd5-dd87-4463-be57-4fd409cf4009-kube-api-access-j8gw7\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.406417 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/281d5fd5-dd87-4463-be57-4fd409cf4009-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.404752 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.406503 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/281d5fd5-dd87-4463-be57-4fd409cf4009-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.406862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/281d5fd5-dd87-4463-be57-4fd409cf4009-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.406336 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281d5fd5-dd87-4463-be57-4fd409cf4009-config\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.410368 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281d5fd5-dd87-4463-be57-4fd409cf4009-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.413557 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/281d5fd5-dd87-4463-be57-4fd409cf4009-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.417424 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/281d5fd5-dd87-4463-be57-4fd409cf4009-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.441209 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8gw7\" (UniqueName: \"kubernetes.io/projected/281d5fd5-dd87-4463-be57-4fd409cf4009-kube-api-access-j8gw7\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " 
pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.456595 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"281d5fd5-dd87-4463-be57-4fd409cf4009\") " pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:37 crc kubenswrapper[4760]: I1125 08:26:37.472396 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:39 crc kubenswrapper[4760]: E1125 08:26:39.796984 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 25 08:26:39 crc kubenswrapper[4760]: E1125 08:26:39.797694 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n6kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bdd77c89-6vmmx_openstack(4fc2d668-b156-4466-8797-a6d09912d8e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:26:39 crc kubenswrapper[4760]: E1125 08:26:39.798894 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" podUID="4fc2d668-b156-4466-8797-a6d09912d8e6" Nov 25 08:26:39 crc kubenswrapper[4760]: I1125 08:26:39.916781 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 08:26:39 crc kubenswrapper[4760]: I1125 08:26:39.918178 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:39 crc kubenswrapper[4760]: I1125 08:26:39.921999 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 08:26:39 crc kubenswrapper[4760]: I1125 08:26:39.922308 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mm2d8" Nov 25 08:26:39 crc kubenswrapper[4760]: I1125 08:26:39.923001 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 08:26:39 crc kubenswrapper[4760]: I1125 08:26:39.923468 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 08:26:39 crc kubenswrapper[4760]: I1125 08:26:39.991313 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.073283 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1645e51-365a-4195-bb42-5641959bf77f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.073406 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1645e51-365a-4195-bb42-5641959bf77f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " 
pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.073482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpp89\" (UniqueName: \"kubernetes.io/projected/c1645e51-365a-4195-bb42-5641959bf77f-kube-api-access-dpp89\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.073535 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1645e51-365a-4195-bb42-5641959bf77f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.073578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1645e51-365a-4195-bb42-5641959bf77f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.073640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1645e51-365a-4195-bb42-5641959bf77f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.073681 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.073722 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1645e51-365a-4195-bb42-5641959bf77f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: E1125 08:26:40.089978 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 25 08:26:40 crc kubenswrapper[4760]: E1125 08:26:40.090139 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp79z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6584b49599-m4m4b_openstack(f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:26:40 crc kubenswrapper[4760]: E1125 08:26:40.091463 4760 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6584b49599-m4m4b" podUID="f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.175324 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1645e51-365a-4195-bb42-5641959bf77f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.175383 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpp89\" (UniqueName: \"kubernetes.io/projected/c1645e51-365a-4195-bb42-5641959bf77f-kube-api-access-dpp89\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.175414 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1645e51-365a-4195-bb42-5641959bf77f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.175450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1645e51-365a-4195-bb42-5641959bf77f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.175473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1645e51-365a-4195-bb42-5641959bf77f-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.175490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.175504 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1645e51-365a-4195-bb42-5641959bf77f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.175529 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1645e51-365a-4195-bb42-5641959bf77f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.176607 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.176861 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1645e51-365a-4195-bb42-5641959bf77f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.177177 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1645e51-365a-4195-bb42-5641959bf77f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.182441 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1645e51-365a-4195-bb42-5641959bf77f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.184719 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1645e51-365a-4195-bb42-5641959bf77f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.190314 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1645e51-365a-4195-bb42-5641959bf77f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.193816 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1645e51-365a-4195-bb42-5641959bf77f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.194230 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpp89\" (UniqueName: \"kubernetes.io/projected/c1645e51-365a-4195-bb42-5641959bf77f-kube-api-access-dpp89\") pod 
\"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.204765 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c1645e51-365a-4195-bb42-5641959bf77f\") " pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.206375 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-997jz"] Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.211218 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 08:26:40 crc kubenswrapper[4760]: W1125 08:26:40.212610 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f4df0a4_e5ad_47f2_a8e9_44a800f24a2d.slice/crio-691b100e8e6f8fe5e823a797a22ad1951f8a75ca1102491ff81d1c6336ead85c WatchSource:0}: Error finding container 691b100e8e6f8fe5e823a797a22ad1951f8a75ca1102491ff81d1c6336ead85c: Status 404 returned error can't find the container with id 691b100e8e6f8fe5e823a797a22ad1951f8a75ca1102491ff81d1c6336ead85c Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.266188 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.399040 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 08:26:40 crc kubenswrapper[4760]: W1125 08:26:40.404109 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1b32df7_1040_4d21_89cd_d5f772bd4014.slice/crio-8b1cf3b5773c83b06dd01e0b2e83a01b589bf00978194bfb63a782be2299bd21 WatchSource:0}: Error finding container 8b1cf3b5773c83b06dd01e0b2e83a01b589bf00978194bfb63a782be2299bd21: Status 404 returned error can't find the container with id 8b1cf3b5773c83b06dd01e0b2e83a01b589bf00978194bfb63a782be2299bd21 Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.423387 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 08:26:40 crc kubenswrapper[4760]: W1125 08:26:40.424121 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1de21d0_f4de_4294_a1b0_ec1328f46531.slice/crio-72b0f80d3920b470a033103c26b36a33d50cf57658bee19acb2b6e1deb131c00 WatchSource:0}: Error finding container 72b0f80d3920b470a033103c26b36a33d50cf57658bee19acb2b6e1deb131c00: Status 404 returned error can't find the container with id 72b0f80d3920b470a033103c26b36a33d50cf57658bee19acb2b6e1deb131c00 Nov 25 08:26:40 crc kubenswrapper[4760]: W1125 08:26:40.524887 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f445d9_b3be_421d_b30a_89759c1ad2e8.slice/crio-3fb6dd4a0a3e7b3c8ca0bb7742f1448ac2df466ab85aff728dd32d652d5c8655 WatchSource:0}: Error finding container 3fb6dd4a0a3e7b3c8ca0bb7742f1448ac2df466ab85aff728dd32d652d5c8655: Status 404 returned error can't find the container with id 3fb6dd4a0a3e7b3c8ca0bb7742f1448ac2df466ab85aff728dd32d652d5c8655 Nov 25 
08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.534137 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 08:26:40 crc kubenswrapper[4760]: W1125 08:26:40.537571 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4a525b3_ee1d_47e0_97fe_49bbcb09f3dd.slice/crio-ab4664c8543e8c78632273c7177e48c13ce29d53032a7b635d686f4c69234358 WatchSource:0}: Error finding container ab4664c8543e8c78632273c7177e48c13ce29d53032a7b635d686f4c69234358: Status 404 returned error can't find the container with id ab4664c8543e8c78632273c7177e48c13ce29d53032a7b635d686f4c69234358 Nov 25 08:26:40 crc kubenswrapper[4760]: W1125 08:26:40.539067 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde9d3301_bdad_46bf_b7c2_4467cfd590dd.slice/crio-051d50963bcf2510de3b9b7e5a3dd1a6ae85c7db644e0a7d66ac4a839cf5e525 WatchSource:0}: Error finding container 051d50963bcf2510de3b9b7e5a3dd1a6ae85c7db644e0a7d66ac4a839cf5e525: Status 404 returned error can't find the container with id 051d50963bcf2510de3b9b7e5a3dd1a6ae85c7db644e0a7d66ac4a839cf5e525 Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.540072 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-m6qmw"] Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.551707 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.558930 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.564979 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wtp5g"] Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.769576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6486446b9f-997jz" event={"ID":"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496","Type":"ContainerStarted","Data":"7d227f03b03173ff6a8b802104a750d881a0e140b21d421fbc9033501caa5070"} Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.771208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wtp5g" event={"ID":"7b050dee-2005-4a2b-8550-6f5d055a86b6","Type":"ContainerStarted","Data":"c0a9058d18d13ef8ca206d1bdaed00d7d6c9d3cf7b491e1bbf3cbc3aeeb2ba60"} Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.774799 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de9d3301-bdad-46bf-b7c2-4467cfd590dd","Type":"ContainerStarted","Data":"051d50963bcf2510de3b9b7e5a3dd1a6ae85c7db644e0a7d66ac4a839cf5e525"} Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.776298 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" event={"ID":"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd","Type":"ContainerStarted","Data":"ab4664c8543e8c78632273c7177e48c13ce29d53032a7b635d686f4c69234358"} Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.777695 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"50f445d9-b3be-421d-b30a-89759c1ad2e8","Type":"ContainerStarted","Data":"3fb6dd4a0a3e7b3c8ca0bb7742f1448ac2df466ab85aff728dd32d652d5c8655"} Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.779055 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f1b32df7-1040-4d21-89cd-d5f772bd4014","Type":"ContainerStarted","Data":"8b1cf3b5773c83b06dd01e0b2e83a01b589bf00978194bfb63a782be2299bd21"} Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.781050 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d","Type":"ContainerStarted","Data":"691b100e8e6f8fe5e823a797a22ad1951f8a75ca1102491ff81d1c6336ead85c"} Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.783240 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1de21d0-f4de-4294-a1b0-ec1328f46531","Type":"ContainerStarted","Data":"72b0f80d3920b470a033103c26b36a33d50cf57658bee19acb2b6e1deb131c00"} Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.784562 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"17455e1c-2662-421d-ac93-ce773e1fd50a","Type":"ContainerStarted","Data":"2edc590eae3219e1d44803d5cb58da3fe3042c86bc6d96276740a45a58e08517"} Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.903898 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 08:26:40 crc kubenswrapper[4760]: I1125 08:26:40.962564 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 08:26:40 crc kubenswrapper[4760]: W1125 08:26:40.976526 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1645e51_365a_4195_bb42_5641959bf77f.slice/crio-deb448a14cbd453429d09dda69454ca63bf9b3abb2bce09e4c6ae10ada10e142 WatchSource:0}: Error finding container deb448a14cbd453429d09dda69454ca63bf9b3abb2bce09e4c6ae10ada10e142: Status 404 returned error can't find the container with id deb448a14cbd453429d09dda69454ca63bf9b3abb2bce09e4c6ae10ada10e142 Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.316566 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.328841 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.403312 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-config\") pod \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.403411 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n6kg\" (UniqueName: \"kubernetes.io/projected/4fc2d668-b156-4466-8797-a6d09912d8e6-kube-api-access-9n6kg\") pod \"4fc2d668-b156-4466-8797-a6d09912d8e6\" (UID: \"4fc2d668-b156-4466-8797-a6d09912d8e6\") " Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.403478 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp79z\" (UniqueName: \"kubernetes.io/projected/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-kube-api-access-lp79z\") pod \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.403499 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc2d668-b156-4466-8797-a6d09912d8e6-config\") pod \"4fc2d668-b156-4466-8797-a6d09912d8e6\" (UID: \"4fc2d668-b156-4466-8797-a6d09912d8e6\") " Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.403531 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-dns-svc\") pod \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\" (UID: \"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d\") " Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.403827 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-config" (OuterVolumeSpecName: "config") pod "f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d" (UID: "f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.404309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fc2d668-b156-4466-8797-a6d09912d8e6-config" (OuterVolumeSpecName: "config") pod "4fc2d668-b156-4466-8797-a6d09912d8e6" (UID: "4fc2d668-b156-4466-8797-a6d09912d8e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.404652 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d" (UID: "f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.424935 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-kube-api-access-lp79z" (OuterVolumeSpecName: "kube-api-access-lp79z") pod "f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d" (UID: "f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d"). InnerVolumeSpecName "kube-api-access-lp79z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.425058 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc2d668-b156-4466-8797-a6d09912d8e6-kube-api-access-9n6kg" (OuterVolumeSpecName: "kube-api-access-9n6kg") pod "4fc2d668-b156-4466-8797-a6d09912d8e6" (UID: "4fc2d668-b156-4466-8797-a6d09912d8e6"). InnerVolumeSpecName "kube-api-access-9n6kg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.451412 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kf25c"] Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.505833 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp79z\" (UniqueName: \"kubernetes.io/projected/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-kube-api-access-lp79z\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.505867 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc2d668-b156-4466-8797-a6d09912d8e6-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.505877 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.505886 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.505896 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n6kg\" (UniqueName: \"kubernetes.io/projected/4fc2d668-b156-4466-8797-a6d09912d8e6-kube-api-access-9n6kg\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.811174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-m4m4b" event={"ID":"f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d","Type":"ContainerDied","Data":"18b043c25f7e1d62cede27e0d0905d463e71d63e0a537f910567a76ae7087404"} Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.811436 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-m4m4b" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.815231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c1645e51-365a-4195-bb42-5641959bf77f","Type":"ContainerStarted","Data":"deb448a14cbd453429d09dda69454ca63bf9b3abb2bce09e4c6ae10ada10e142"} Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.818572 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" event={"ID":"4fc2d668-b156-4466-8797-a6d09912d8e6","Type":"ContainerDied","Data":"50c4cde4f80b954a343b045369aba2c756cf06662ba3153332d79cc2acf53723"} Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.818590 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-6vmmx" Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.820835 4760 generic.go:334] "Generic (PLEG): container finished" podID="b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" containerID="680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb" exitCode=0 Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.820880 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-997jz" event={"ID":"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496","Type":"ContainerDied","Data":"680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb"} Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.822615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"281d5fd5-dd87-4463-be57-4fd409cf4009","Type":"ContainerStarted","Data":"275e68686ddf67f4f8c883d4d54f358346adcbdf172c66252fa710f6a19184aa"} Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.873603 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-m4m4b"] Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.893355 4760 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-m4m4b"] Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.924367 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-6vmmx"] Nov 25 08:26:41 crc kubenswrapper[4760]: I1125 08:26:41.934683 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-6vmmx"] Nov 25 08:26:42 crc kubenswrapper[4760]: I1125 08:26:42.835609 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kf25c" event={"ID":"d1ba8a40-f479-46dc-b509-a9c4d9c4670b","Type":"ContainerStarted","Data":"d3d34263a00af5f905e25faed95fdd0372204ae02f165c0e4699bdbba985c7bb"} Nov 25 08:26:42 crc kubenswrapper[4760]: I1125 08:26:42.948443 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc2d668-b156-4466-8797-a6d09912d8e6" path="/var/lib/kubelet/pods/4fc2d668-b156-4466-8797-a6d09912d8e6/volumes" Nov 25 08:26:42 crc kubenswrapper[4760]: I1125 08:26:42.948803 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d" path="/var/lib/kubelet/pods/f5d3b64a-bb02-4715-bb8d-fbe6ea2e0a8d/volumes" Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.895698 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"281d5fd5-dd87-4463-be57-4fd409cf4009","Type":"ContainerStarted","Data":"85f2e718bd2157187b3c8a26e58de4ceac21e217d22159925fc36549eaecd9e1"} Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.898810 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c1645e51-365a-4195-bb42-5641959bf77f","Type":"ContainerStarted","Data":"c3c9c7aa35fc9c1e886c925ac563f84be1a4d690a31040cbac666586dd850136"} Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.901644 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"f1b32df7-1040-4d21-89cd-d5f772bd4014","Type":"ContainerStarted","Data":"eb34eb1c008148449e38ce471c6140e94987dd37ea610776b056b6f99147d2f5"} Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.901808 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.907159 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-997jz" event={"ID":"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496","Type":"ContainerStarted","Data":"87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6"} Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.907527 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.909388 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wtp5g" event={"ID":"7b050dee-2005-4a2b-8550-6f5d055a86b6","Type":"ContainerStarted","Data":"b47f544786f6bba1bdd82df1ca9ba3b4794ff6e6984b6dd0fc309c966d08a4a0"} Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.909687 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-wtp5g" Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.911095 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de9d3301-bdad-46bf-b7c2-4467cfd590dd","Type":"ContainerStarted","Data":"fbb26d34e19b42dc94c8efb974095d2e75e6a0597e2bc306f17296d5a8a10bc3"} Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.916697 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" containerID="8ef8959a6b71407b24357b1785d053238c8847e811d4c19155fd3b8fed672df9" exitCode=0 Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.916971 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" event={"ID":"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd","Type":"ContainerDied","Data":"8ef8959a6b71407b24357b1785d053238c8847e811d4c19155fd3b8fed672df9"} Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.920054 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"50f445d9-b3be-421d-b30a-89759c1ad2e8","Type":"ContainerStarted","Data":"00c7ebe103517c8eb5440b169de110b1f929b8edd695ff408bf0803c5d8e40f1"} Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.928561 4760 generic.go:334] "Generic (PLEG): container finished" podID="d1ba8a40-f479-46dc-b509-a9c4d9c4670b" containerID="6cf182f4e930ee4bf119d42097a35abdd2d5716ff6030a2c47986ab5de01f8c4" exitCode=0 Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.931307 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kf25c" event={"ID":"d1ba8a40-f479-46dc-b509-a9c4d9c4670b","Type":"ContainerDied","Data":"6cf182f4e930ee4bf119d42097a35abdd2d5716ff6030a2c47986ab5de01f8c4"} Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.933529 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.96298146 podStartE2EDuration="19.933517181s" podCreationTimestamp="2025-11-25 08:26:30 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.406513953 +0000 UTC m=+934.115544748" lastFinishedPulling="2025-11-25 08:26:48.377049674 +0000 UTC m=+942.086080469" observedRunningTime="2025-11-25 08:26:49.926035245 +0000 UTC m=+943.635066040" watchObservedRunningTime="2025-11-25 08:26:49.933517181 +0000 UTC m=+943.642547976" Nov 25 08:26:49 crc kubenswrapper[4760]: I1125 08:26:49.938194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"17455e1c-2662-421d-ac93-ce773e1fd50a","Type":"ContainerStarted","Data":"26697c74e565334a3dd7e61539a481448cf107b5fc642be14a60f4eb30f211ec"} Nov 25 08:26:49 
crc kubenswrapper[4760]: I1125 08:26:49.951576 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6486446b9f-997jz" podStartSLOduration=23.423694498 podStartE2EDuration="23.95155036s" podCreationTimestamp="2025-11-25 08:26:26 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.213445173 +0000 UTC m=+933.922475958" lastFinishedPulling="2025-11-25 08:26:40.741301025 +0000 UTC m=+934.450331820" observedRunningTime="2025-11-25 08:26:49.94946953 +0000 UTC m=+943.658500335" watchObservedRunningTime="2025-11-25 08:26:49.95155036 +0000 UTC m=+943.660581155" Nov 25 08:26:50 crc kubenswrapper[4760]: I1125 08:26:50.003052 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wtp5g" podStartSLOduration=5.766977505 podStartE2EDuration="14.003034613s" podCreationTimestamp="2025-11-25 08:26:36 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.546600938 +0000 UTC m=+934.255631733" lastFinishedPulling="2025-11-25 08:26:48.782658056 +0000 UTC m=+942.491688841" observedRunningTime="2025-11-25 08:26:50.001300153 +0000 UTC m=+943.710330958" watchObservedRunningTime="2025-11-25 08:26:50.003034613 +0000 UTC m=+943.712065408" Nov 25 08:26:50 crc kubenswrapper[4760]: I1125 08:26:50.025097 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.679449824 podStartE2EDuration="18.025070397s" podCreationTimestamp="2025-11-25 08:26:32 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.527304482 +0000 UTC m=+934.236335277" lastFinishedPulling="2025-11-25 08:26:48.872925055 +0000 UTC m=+942.581955850" observedRunningTime="2025-11-25 08:26:50.023829592 +0000 UTC m=+943.732860387" watchObservedRunningTime="2025-11-25 08:26:50.025070397 +0000 UTC m=+943.734101182" Nov 25 08:26:50 crc kubenswrapper[4760]: E1125 08:26:50.261302 4760 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 25 08:26:50 crc 
kubenswrapper[4760]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 25 08:26:50 crc kubenswrapper[4760]: > podSandboxID="ab4664c8543e8c78632273c7177e48c13ce29d53032a7b635d686f4c69234358" Nov 25 08:26:50 crc kubenswrapper[4760]: E1125 08:26:50.261454 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 25 08:26:50 crc kubenswrapper[4760]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9rhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessPro
be:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c6d9948dc-m6qmw_openstack(f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 25 08:26:50 crc kubenswrapper[4760]: > logger="UnhandledError" Nov 25 08:26:50 crc kubenswrapper[4760]: E1125 08:26:50.263288 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" 
podUID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" Nov 25 08:26:50 crc kubenswrapper[4760]: I1125 08:26:50.950819 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kf25c" event={"ID":"d1ba8a40-f479-46dc-b509-a9c4d9c4670b","Type":"ContainerStarted","Data":"39d02dc356b2af48124a9827bd34db0afcd64b586d9dd72d250650affdd9941d"} Nov 25 08:26:50 crc kubenswrapper[4760]: I1125 08:26:50.953067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d","Type":"ContainerStarted","Data":"c9b801485c25de17cda2dabe57e1991d03968731843b911e0241cbab2acadee2"} Nov 25 08:26:50 crc kubenswrapper[4760]: I1125 08:26:50.954675 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1de21d0-f4de-4294-a1b0-ec1328f46531","Type":"ContainerStarted","Data":"e0f65cbf20b69fcac39954194d3b9cfcddfcddfc66fab1a7b56132d9e8e38deb"} Nov 25 08:26:50 crc kubenswrapper[4760]: I1125 08:26:50.955620 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 08:26:51 crc kubenswrapper[4760]: I1125 08:26:51.965409 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kf25c" event={"ID":"d1ba8a40-f479-46dc-b509-a9c4d9c4670b","Type":"ContainerStarted","Data":"efaa2fed1d9d4dd61b047ebeffea1ca729c68e5d92eafd74e4990249cfa44f53"} Nov 25 08:26:52 crc kubenswrapper[4760]: I1125 08:26:52.979639 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" event={"ID":"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd","Type":"ContainerStarted","Data":"ff9827046bf336d4a9d05ce398b7e24b1d56cb7ee86113f38eb2a30cb0ac6bd9"} Nov 25 08:26:52 crc kubenswrapper[4760]: I1125 08:26:52.979972 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:52 crc kubenswrapper[4760]: I1125 
08:26:52.980607 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:26:52 crc kubenswrapper[4760]: I1125 08:26:52.980645 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:53 crc kubenswrapper[4760]: I1125 08:26:53.005886 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kf25c" podStartSLOduration=10.534375305 podStartE2EDuration="17.005866514s" podCreationTimestamp="2025-11-25 08:26:36 +0000 UTC" firstStartedPulling="2025-11-25 08:26:41.903960439 +0000 UTC m=+935.612991234" lastFinishedPulling="2025-11-25 08:26:48.375451648 +0000 UTC m=+942.084482443" observedRunningTime="2025-11-25 08:26:53.000920521 +0000 UTC m=+946.709951316" watchObservedRunningTime="2025-11-25 08:26:53.005866514 +0000 UTC m=+946.714897309" Nov 25 08:26:53 crc kubenswrapper[4760]: I1125 08:26:53.025395 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" podStartSLOduration=20.072095471 podStartE2EDuration="27.025346875s" podCreationTimestamp="2025-11-25 08:26:26 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.540610675 +0000 UTC m=+934.249641470" lastFinishedPulling="2025-11-25 08:26:47.493862079 +0000 UTC m=+941.202892874" observedRunningTime="2025-11-25 08:26:53.023640946 +0000 UTC m=+946.732671741" watchObservedRunningTime="2025-11-25 08:26:53.025346875 +0000 UTC m=+946.734377670" Nov 25 08:26:54 crc kubenswrapper[4760]: I1125 08:26:54.994003 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"281d5fd5-dd87-4463-be57-4fd409cf4009","Type":"ContainerStarted","Data":"d683393b64dab495fb4f440b0b40d241c6dfd27cbe111f17b862a7ac63f85516"} Nov 25 08:26:54 crc kubenswrapper[4760]: I1125 08:26:54.996661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"c1645e51-365a-4195-bb42-5641959bf77f","Type":"ContainerStarted","Data":"ab8b836a211e7b03d9b5398f4816ea63dbf2f4a97fbbdc7acb667c471e252349"} Nov 25 08:26:55 crc kubenswrapper[4760]: I1125 08:26:55.021607 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.527406514 podStartE2EDuration="19.021587007s" podCreationTimestamp="2025-11-25 08:26:36 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.91338279 +0000 UTC m=+934.622413585" lastFinishedPulling="2025-11-25 08:26:54.407563283 +0000 UTC m=+948.116594078" observedRunningTime="2025-11-25 08:26:55.011161497 +0000 UTC m=+948.720192292" watchObservedRunningTime="2025-11-25 08:26:55.021587007 +0000 UTC m=+948.730617802" Nov 25 08:26:55 crc kubenswrapper[4760]: I1125 08:26:55.033373 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.596689939 podStartE2EDuration="17.033354176s" podCreationTimestamp="2025-11-25 08:26:38 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.979629648 +0000 UTC m=+934.688660443" lastFinishedPulling="2025-11-25 08:26:54.416293885 +0000 UTC m=+948.125324680" observedRunningTime="2025-11-25 08:26:55.032602424 +0000 UTC m=+948.741633259" watchObservedRunningTime="2025-11-25 08:26:55.033354176 +0000 UTC m=+948.742384971" Nov 25 08:26:55 crc kubenswrapper[4760]: I1125 08:26:55.267504 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:55 crc kubenswrapper[4760]: I1125 08:26:55.267670 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:55 crc kubenswrapper[4760]: I1125 08:26:55.315706 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:55 crc kubenswrapper[4760]: I1125 08:26:55.473550 4760 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:55 crc kubenswrapper[4760]: I1125 08:26:55.510786 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.005177 4760 generic.go:334] "Generic (PLEG): container finished" podID="de9d3301-bdad-46bf-b7c2-4467cfd590dd" containerID="fbb26d34e19b42dc94c8efb974095d2e75e6a0597e2bc306f17296d5a8a10bc3" exitCode=0 Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.005322 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de9d3301-bdad-46bf-b7c2-4467cfd590dd","Type":"ContainerDied","Data":"fbb26d34e19b42dc94c8efb974095d2e75e6a0597e2bc306f17296d5a8a10bc3"} Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.007201 4760 generic.go:334] "Generic (PLEG): container finished" podID="17455e1c-2662-421d-ac93-ce773e1fd50a" containerID="26697c74e565334a3dd7e61539a481448cf107b5fc642be14a60f4eb30f211ec" exitCode=0 Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.007391 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"17455e1c-2662-421d-ac93-ce773e1fd50a","Type":"ContainerDied","Data":"26697c74e565334a3dd7e61539a481448cf107b5fc642be14a60f4eb30f211ec"} Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.007746 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.050497 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.060988 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.174324 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.217943 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-997jz"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.218151 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6486446b9f-997jz" podUID="b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" containerName="dnsmasq-dns" containerID="cri-o://87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6" gracePeriod=10 Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.229735 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.321033 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-7sfd5"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.322411 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.325347 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.356162 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-7sfd5"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.364345 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fgpnw"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.365929 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.370007 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.379020 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fgpnw"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.476018 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.477607 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.480745 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.481301 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.481495 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.482745 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-dns-svc\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.482917 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/68c768c5-3e1e-41a8-af21-c886ea5959a3-ovs-rundir\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " 
pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.482990 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c768c5-3e1e-41a8-af21-c886ea5959a3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.483090 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c768c5-3e1e-41a8-af21-c886ea5959a3-config\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.483135 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqwk2\" (UniqueName: \"kubernetes.io/projected/68c768c5-3e1e-41a8-af21-c886ea5959a3-kube-api-access-fqwk2\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.483216 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-config\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.483444 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-ovsdbserver-sb\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: 
\"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.483533 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/68c768c5-3e1e-41a8-af21-c886ea5959a3-ovn-rundir\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.483600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjj56\" (UniqueName: \"kubernetes.io/projected/9756cb1d-9720-42d5-aa31-0a56e966c73f-kube-api-access-tjj56\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.483627 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-prz7t" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.483669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c768c5-3e1e-41a8-af21-c886ea5959a3-combined-ca-bundle\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.513453 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.531627 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-m6qmw"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.531941 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" 
podUID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" containerName="dnsmasq-dns" containerID="cri-o://ff9827046bf336d4a9d05ce398b7e24b1d56cb7ee86113f38eb2a30cb0ac6bd9" gracePeriod=10 Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.563194 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-jhwc6"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.564564 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.568063 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.573667 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-jhwc6"] Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c768c5-3e1e-41a8-af21-c886ea5959a3-config\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585578 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqwk2\" (UniqueName: \"kubernetes.io/projected/68c768c5-3e1e-41a8-af21-c886ea5959a3-kube-api-access-fqwk2\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585609 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-config\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 
25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585632 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e32299-69a7-4572-8ff1-1d2d409d5137-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585659 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-ovsdbserver-sb\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585710 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/68c768c5-3e1e-41a8-af21-c886ea5959a3-ovn-rundir\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585728 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjj56\" (UniqueName: \"kubernetes.io/projected/9756cb1d-9720-42d5-aa31-0a56e966c73f-kube-api-access-tjj56\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585752 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgrdv\" (UniqueName: \"kubernetes.io/projected/22e32299-69a7-4572-8ff1-1d2d409d5137-kube-api-access-bgrdv\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 
08:26:56.585775 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c768c5-3e1e-41a8-af21-c886ea5959a3-combined-ca-bundle\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585800 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e32299-69a7-4572-8ff1-1d2d409d5137-scripts\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585817 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e32299-69a7-4572-8ff1-1d2d409d5137-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e32299-69a7-4572-8ff1-1d2d409d5137-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585878 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22e32299-69a7-4572-8ff1-1d2d409d5137-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585895 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e32299-69a7-4572-8ff1-1d2d409d5137-config\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585920 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-dns-svc\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585939 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/68c768c5-3e1e-41a8-af21-c886ea5959a3-ovs-rundir\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.585959 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c768c5-3e1e-41a8-af21-c886ea5959a3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.588010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c768c5-3e1e-41a8-af21-c886ea5959a3-config\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.588123 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-ovsdbserver-sb\") pod 
\"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.588819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-dns-svc\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.589489 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/68c768c5-3e1e-41a8-af21-c886ea5959a3-ovs-rundir\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.589917 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/68c768c5-3e1e-41a8-af21-c886ea5959a3-ovn-rundir\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.590133 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-config\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.595033 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c768c5-3e1e-41a8-af21-c886ea5959a3-combined-ca-bundle\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 
08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.597803 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68c768c5-3e1e-41a8-af21-c886ea5959a3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.611236 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqwk2\" (UniqueName: \"kubernetes.io/projected/68c768c5-3e1e-41a8-af21-c886ea5959a3-kube-api-access-fqwk2\") pod \"ovn-controller-metrics-fgpnw\" (UID: \"68c768c5-3e1e-41a8-af21-c886ea5959a3\") " pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.614081 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjj56\" (UniqueName: \"kubernetes.io/projected/9756cb1d-9720-42d5-aa31-0a56e966c73f-kube-api-access-tjj56\") pod \"dnsmasq-dns-65c9b8d4f7-7sfd5\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.689755 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e32299-69a7-4572-8ff1-1d2d409d5137-config\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.693523 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.693639 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.693724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr9sb\" (UniqueName: \"kubernetes.io/projected/16c68abf-1eb4-4516-a83d-0ca72287b9fd-kube-api-access-dr9sb\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.693846 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e32299-69a7-4572-8ff1-1d2d409d5137-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.694352 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.694470 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-config\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.694530 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgrdv\" (UniqueName: \"kubernetes.io/projected/22e32299-69a7-4572-8ff1-1d2d409d5137-kube-api-access-bgrdv\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.694600 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e32299-69a7-4572-8ff1-1d2d409d5137-scripts\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.694632 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e32299-69a7-4572-8ff1-1d2d409d5137-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.695298 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e32299-69a7-4572-8ff1-1d2d409d5137-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.695341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22e32299-69a7-4572-8ff1-1d2d409d5137-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.696715 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22e32299-69a7-4572-8ff1-1d2d409d5137-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.691991 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e32299-69a7-4572-8ff1-1d2d409d5137-config\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.702582 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e32299-69a7-4572-8ff1-1d2d409d5137-scripts\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.712037 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e32299-69a7-4572-8ff1-1d2d409d5137-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.712934 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e32299-69a7-4572-8ff1-1d2d409d5137-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.716616 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.717031 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e32299-69a7-4572-8ff1-1d2d409d5137-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.726078 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgrdv\" (UniqueName: \"kubernetes.io/projected/22e32299-69a7-4572-8ff1-1d2d409d5137-kube-api-access-bgrdv\") pod \"ovn-northd-0\" (UID: \"22e32299-69a7-4572-8ff1-1d2d409d5137\") " pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.728170 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fgpnw" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.795097 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.796364 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.796433 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.796471 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr9sb\" (UniqueName: \"kubernetes.io/projected/16c68abf-1eb4-4516-a83d-0ca72287b9fd-kube-api-access-dr9sb\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.796511 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.796535 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-config\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 
crc kubenswrapper[4760]: I1125 08:26:56.797313 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.797343 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-config\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.797999 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.798317 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.818043 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr9sb\" (UniqueName: \"kubernetes.io/projected/16c68abf-1eb4-4516-a83d-0ca72287b9fd-kube-api-access-dr9sb\") pod \"dnsmasq-dns-5c476d78c5-jhwc6\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.823239 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:56 crc kubenswrapper[4760]: I1125 08:26:56.926603 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:56.999630 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-dns-svc\") pod \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.000017 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-config\") pod \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.000053 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnmlx\" (UniqueName: \"kubernetes.io/projected/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-kube-api-access-bnmlx\") pod \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\" (UID: \"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496\") " Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.005641 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-kube-api-access-bnmlx" (OuterVolumeSpecName: "kube-api-access-bnmlx") pod "b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" (UID: "b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496"). InnerVolumeSpecName "kube-api-access-bnmlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.037688 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de9d3301-bdad-46bf-b7c2-4467cfd590dd","Type":"ContainerStarted","Data":"ad6bbcd755f21d8c0e902ac18575551d907975e8ffaaf48273752bb7aefec522"} Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.043365 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" containerID="ff9827046bf336d4a9d05ce398b7e24b1d56cb7ee86113f38eb2a30cb0ac6bd9" exitCode=0 Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.043460 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" event={"ID":"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd","Type":"ContainerDied","Data":"ff9827046bf336d4a9d05ce398b7e24b1d56cb7ee86113f38eb2a30cb0ac6bd9"} Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.046669 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"17455e1c-2662-421d-ac93-ce773e1fd50a","Type":"ContainerStarted","Data":"cb01794411f40f13dd5c425dc0088a896e22f5440eb1519d10e5807b362fc013"} Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.059299 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-config" (OuterVolumeSpecName: "config") pod "b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" (UID: "b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.059993 4760 generic.go:334] "Generic (PLEG): container finished" podID="b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" containerID="87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6" exitCode=0 Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.061064 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-997jz" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.061228 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-997jz" event={"ID":"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496","Type":"ContainerDied","Data":"87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6"} Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.061277 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-997jz" event={"ID":"b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496","Type":"ContainerDied","Data":"7d227f03b03173ff6a8b802104a750d881a0e140b21d421fbc9033501caa5070"} Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.061295 4760 scope.go:117] "RemoveContainer" containerID="87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.075254 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" (UID: "b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.076909 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.242761916 podStartE2EDuration="29.076891259s" podCreationTimestamp="2025-11-25 08:26:28 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.541451939 +0000 UTC m=+934.250482734" lastFinishedPulling="2025-11-25 08:26:48.375581282 +0000 UTC m=+942.084612077" observedRunningTime="2025-11-25 08:26:57.072852203 +0000 UTC m=+950.781883038" watchObservedRunningTime="2025-11-25 08:26:57.076891259 +0000 UTC m=+950.785922054" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.104118 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.104158 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.104170 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnmlx\" (UniqueName: \"kubernetes.io/projected/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496-kube-api-access-bnmlx\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.111867 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.873479522 podStartE2EDuration="28.111844656s" podCreationTimestamp="2025-11-25 08:26:29 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.544316862 +0000 UTC m=+934.253347657" lastFinishedPulling="2025-11-25 08:26:48.782681996 +0000 UTC m=+942.491712791" observedRunningTime="2025-11-25 08:26:57.109055586 +0000 UTC m=+950.818086381" 
watchObservedRunningTime="2025-11-25 08:26:57.111844656 +0000 UTC m=+950.820875451" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.182795 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.209427 4760 scope.go:117] "RemoveContainer" containerID="680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.212465 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9rhr\" (UniqueName: \"kubernetes.io/projected/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-kube-api-access-m9rhr\") pod \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.212614 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-dns-svc\") pod \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.212639 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-config\") pod \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\" (UID: \"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd\") " Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.235305 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-kube-api-access-m9rhr" (OuterVolumeSpecName: "kube-api-access-m9rhr") pod "f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" (UID: "f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd"). InnerVolumeSpecName "kube-api-access-m9rhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.310132 4760 scope.go:117] "RemoveContainer" containerID="87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6" Nov 25 08:26:57 crc kubenswrapper[4760]: E1125 08:26:57.313208 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6\": container with ID starting with 87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6 not found: ID does not exist" containerID="87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.313284 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6"} err="failed to get container status \"87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6\": rpc error: code = NotFound desc = could not find container \"87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6\": container with ID starting with 87ffeed6e94bc48856d031a89b94262b5d7460f672f0f1a28116bd83eba645e6 not found: ID does not exist" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.313316 4760 scope.go:117] "RemoveContainer" containerID="680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.316229 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9rhr\" (UniqueName: \"kubernetes.io/projected/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-kube-api-access-m9rhr\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:57 crc kubenswrapper[4760]: E1125 08:26:57.319747 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb\": container with ID starting with 680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb not found: ID does not exist" containerID="680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.323445 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb"} err="failed to get container status \"680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb\": rpc error: code = NotFound desc = could not find container \"680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb\": container with ID starting with 680d4a8bea6211c5b0428a684437e0609ce82f144ce193d982409ba0efc90acb not found: ID does not exist" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.379365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-config" (OuterVolumeSpecName: "config") pod "f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" (UID: "f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.385793 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fgpnw"] Nov 25 08:26:57 crc kubenswrapper[4760]: W1125 08:26:57.399681 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68c768c5_3e1e_41a8_af21_c886ea5959a3.slice/crio-3c1da30509a2be05fef5c414cf9b353f76029c6ec9198f0e225e954e3f2fd339 WatchSource:0}: Error finding container 3c1da30509a2be05fef5c414cf9b353f76029c6ec9198f0e225e954e3f2fd339: Status 404 returned error can't find the container with id 3c1da30509a2be05fef5c414cf9b353f76029c6ec9198f0e225e954e3f2fd339 Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.400742 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-997jz"] Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.402150 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" (UID: "f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.407284 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-997jz"] Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.424095 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.424118 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.451876 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-7sfd5"] Nov 25 08:26:57 crc kubenswrapper[4760]: W1125 08:26:57.460838 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9756cb1d_9720_42d5_aa31_0a56e966c73f.slice/crio-40e208670eec0a42c7dc3f950970b2f7a3ad9ea2c2dca98b7cc2e5662f557ec2 WatchSource:0}: Error finding container 40e208670eec0a42c7dc3f950970b2f7a3ad9ea2c2dca98b7cc2e5662f557ec2: Status 404 returned error can't find the container with id 40e208670eec0a42c7dc3f950970b2f7a3ad9ea2c2dca98b7cc2e5662f557ec2 Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.653425 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 08:26:57 crc kubenswrapper[4760]: I1125 08:26:57.730494 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-jhwc6"] Nov 25 08:26:57 crc kubenswrapper[4760]: W1125 08:26:57.731363 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c68abf_1eb4_4516_a83d_0ca72287b9fd.slice/crio-0b9d8759db22a1ac92b7749b74bc743c181f3430c6e4e9232a0b794223fc687a WatchSource:0}: Error finding container 0b9d8759db22a1ac92b7749b74bc743c181f3430c6e4e9232a0b794223fc687a: Status 404 returned error can't find the container with id 0b9d8759db22a1ac92b7749b74bc743c181f3430c6e4e9232a0b794223fc687a Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.069864 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fgpnw" event={"ID":"68c768c5-3e1e-41a8-af21-c886ea5959a3","Type":"ContainerStarted","Data":"067f5a3828b4ff51e8fe24451204403488c59ddb8a6167e4d23ceeeef2dff9cc"} Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.070952 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fgpnw" event={"ID":"68c768c5-3e1e-41a8-af21-c886ea5959a3","Type":"ContainerStarted","Data":"3c1da30509a2be05fef5c414cf9b353f76029c6ec9198f0e225e954e3f2fd339"} Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.074118 4760 generic.go:334] "Generic (PLEG): container finished" podID="9756cb1d-9720-42d5-aa31-0a56e966c73f" containerID="f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca" exitCode=0 Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.074320 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" event={"ID":"9756cb1d-9720-42d5-aa31-0a56e966c73f","Type":"ContainerDied","Data":"f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca"} Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.074371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" event={"ID":"9756cb1d-9720-42d5-aa31-0a56e966c73f","Type":"ContainerStarted","Data":"40e208670eec0a42c7dc3f950970b2f7a3ad9ea2c2dca98b7cc2e5662f557ec2"} Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.080838 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" event={"ID":"f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd","Type":"ContainerDied","Data":"ab4664c8543e8c78632273c7177e48c13ce29d53032a7b635d686f4c69234358"} Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.081168 4760 scope.go:117] "RemoveContainer" containerID="ff9827046bf336d4a9d05ce398b7e24b1d56cb7ee86113f38eb2a30cb0ac6bd9" Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.080997 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-m6qmw" Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.083730 4760 generic.go:334] "Generic (PLEG): container finished" podID="16c68abf-1eb4-4516-a83d-0ca72287b9fd" containerID="63916b9991003f9257788a992cace8f92c8af577ccac376c71aee79007dfcecd" exitCode=0 Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.083850 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" event={"ID":"16c68abf-1eb4-4516-a83d-0ca72287b9fd","Type":"ContainerDied","Data":"63916b9991003f9257788a992cace8f92c8af577ccac376c71aee79007dfcecd"} Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.083924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" event={"ID":"16c68abf-1eb4-4516-a83d-0ca72287b9fd","Type":"ContainerStarted","Data":"0b9d8759db22a1ac92b7749b74bc743c181f3430c6e4e9232a0b794223fc687a"} Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.087973 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22e32299-69a7-4572-8ff1-1d2d409d5137","Type":"ContainerStarted","Data":"e1ecb1eb0be755eb141d210a67b74b23823095c3d73b83ce49ec364649a31817"} Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.088366 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fgpnw" 
podStartSLOduration=2.08835659 podStartE2EDuration="2.08835659s" podCreationTimestamp="2025-11-25 08:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:26:58.084766517 +0000 UTC m=+951.793797312" watchObservedRunningTime="2025-11-25 08:26:58.08835659 +0000 UTC m=+951.797387385" Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.276442 4760 scope.go:117] "RemoveContainer" containerID="8ef8959a6b71407b24357b1785d053238c8847e811d4c19155fd3b8fed672df9" Nov 25 08:26:58 crc kubenswrapper[4760]: E1125 08:26:58.302192 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4a525b3_ee1d_47e0_97fe_49bbcb09f3dd.slice\": RecentStats: unable to find data in memory cache]" Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.315645 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-m6qmw"] Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.325444 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-m6qmw"] Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.947934 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" path="/var/lib/kubelet/pods/b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496/volumes" Nov 25 08:26:58 crc kubenswrapper[4760]: I1125 08:26:58.949017 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" path="/var/lib/kubelet/pods/f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd/volumes" Nov 25 08:26:59 crc kubenswrapper[4760]: I1125 08:26:59.097380 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" 
event={"ID":"9756cb1d-9720-42d5-aa31-0a56e966c73f","Type":"ContainerStarted","Data":"e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7"} Nov 25 08:26:59 crc kubenswrapper[4760]: I1125 08:26:59.097528 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:26:59 crc kubenswrapper[4760]: I1125 08:26:59.100526 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" event={"ID":"16c68abf-1eb4-4516-a83d-0ca72287b9fd","Type":"ContainerStarted","Data":"c97ce401e1a3e5141d783735c6986c875f4a1fe1686670c4a8b5f540970a80d4"} Nov 25 08:26:59 crc kubenswrapper[4760]: I1125 08:26:59.119296 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" podStartSLOduration=3.1192691 podStartE2EDuration="3.1192691s" podCreationTimestamp="2025-11-25 08:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:26:59.116085178 +0000 UTC m=+952.825115973" watchObservedRunningTime="2025-11-25 08:26:59.1192691 +0000 UTC m=+952.828299895" Nov 25 08:26:59 crc kubenswrapper[4760]: I1125 08:26:59.158269 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" podStartSLOduration=3.158215361 podStartE2EDuration="3.158215361s" podCreationTimestamp="2025-11-25 08:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:26:59.132537882 +0000 UTC m=+952.841568717" watchObservedRunningTime="2025-11-25 08:26:59.158215361 +0000 UTC m=+952.867246166" Nov 25 08:26:59 crc kubenswrapper[4760]: I1125 08:26:59.479573 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 08:26:59 crc kubenswrapper[4760]: I1125 
08:26:59.479636 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 08:27:00 crc kubenswrapper[4760]: I1125 08:27:00.109225 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:27:00 crc kubenswrapper[4760]: I1125 08:27:00.762330 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 08:27:00 crc kubenswrapper[4760]: I1125 08:27:00.762727 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 08:27:03 crc kubenswrapper[4760]: I1125 08:27:03.037522 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 08:27:04 crc kubenswrapper[4760]: E1125 08:27:04.352513 4760 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.21:50314->38.129.56.21:33427: write tcp 38.129.56.21:50314->38.129.56.21:33427: write: connection reset by peer Nov 25 08:27:05 crc kubenswrapper[4760]: I1125 08:27:05.147724 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22e32299-69a7-4572-8ff1-1d2d409d5137","Type":"ContainerStarted","Data":"4c1f27282da80205abdb184f073f906e8eb3dd9d0de8e7b06893caf9c4b7a62d"} Nov 25 08:27:05 crc kubenswrapper[4760]: I1125 08:27:05.148022 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22e32299-69a7-4572-8ff1-1d2d409d5137","Type":"ContainerStarted","Data":"0645359d0bf6d98fe1ff261257afabe7c5b47ad0930eac92625d544c7ad400ac"} Nov 25 08:27:05 crc kubenswrapper[4760]: I1125 08:27:05.148067 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 25 08:27:05 crc kubenswrapper[4760]: I1125 08:27:05.167462 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-northd-0" podStartSLOduration=2.885132547 podStartE2EDuration="9.167444465s" podCreationTimestamp="2025-11-25 08:26:56 +0000 UTC" firstStartedPulling="2025-11-25 08:26:57.656168973 +0000 UTC m=+951.365199778" lastFinishedPulling="2025-11-25 08:27:03.938480911 +0000 UTC m=+957.647511696" observedRunningTime="2025-11-25 08:27:05.164197192 +0000 UTC m=+958.873228007" watchObservedRunningTime="2025-11-25 08:27:05.167444465 +0000 UTC m=+958.876475260" Nov 25 08:27:05 crc kubenswrapper[4760]: I1125 08:27:05.580657 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 08:27:05 crc kubenswrapper[4760]: I1125 08:27:05.656275 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.446873 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e5f4-account-create-lz7hl"] Nov 25 08:27:06 crc kubenswrapper[4760]: E1125 08:27:06.447637 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" containerName="dnsmasq-dns" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.447750 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" containerName="dnsmasq-dns" Nov 25 08:27:06 crc kubenswrapper[4760]: E1125 08:27:06.447821 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" containerName="dnsmasq-dns" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.447876 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" containerName="dnsmasq-dns" Nov 25 08:27:06 crc kubenswrapper[4760]: E1125 08:27:06.447938 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" containerName="init" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 
08:27:06.448006 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" containerName="init" Nov 25 08:27:06 crc kubenswrapper[4760]: E1125 08:27:06.448081 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" containerName="init" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.448142 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" containerName="init" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.448376 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d7f47a-0d75-4b05-9ac8-3ecd4ea95496" containerName="dnsmasq-dns" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.448454 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a525b3-ee1d-47e0-97fe-49bbcb09f3dd" containerName="dnsmasq-dns" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.449009 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.451564 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.458811 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e5f4-account-create-lz7hl"] Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.473661 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4ln57"] Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.474938 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4ln57" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.491431 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4ln57"] Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.518610 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn8wv\" (UniqueName: \"kubernetes.io/projected/29782cbf-c176-4549-95ca-9a4c6c439459-kube-api-access-xn8wv\") pod \"glance-db-create-4ln57\" (UID: \"29782cbf-c176-4549-95ca-9a4c6c439459\") " pod="openstack/glance-db-create-4ln57" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.518700 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29782cbf-c176-4549-95ca-9a4c6c439459-operator-scripts\") pod \"glance-db-create-4ln57\" (UID: \"29782cbf-c176-4549-95ca-9a4c6c439459\") " pod="openstack/glance-db-create-4ln57" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.518737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p9jf\" (UniqueName: \"kubernetes.io/projected/c9ac8fd8-1d1b-415e-963e-ad2242769cad-kube-api-access-5p9jf\") pod \"glance-e5f4-account-create-lz7hl\" (UID: \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\") " pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.518806 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9ac8fd8-1d1b-415e-963e-ad2242769cad-operator-scripts\") pod \"glance-e5f4-account-create-lz7hl\" (UID: \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\") " pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.620996 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29782cbf-c176-4549-95ca-9a4c6c439459-operator-scripts\") pod \"glance-db-create-4ln57\" (UID: \"29782cbf-c176-4549-95ca-9a4c6c439459\") " pod="openstack/glance-db-create-4ln57" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.621956 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p9jf\" (UniqueName: \"kubernetes.io/projected/c9ac8fd8-1d1b-415e-963e-ad2242769cad-kube-api-access-5p9jf\") pod \"glance-e5f4-account-create-lz7hl\" (UID: \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\") " pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.622000 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29782cbf-c176-4549-95ca-9a4c6c439459-operator-scripts\") pod \"glance-db-create-4ln57\" (UID: \"29782cbf-c176-4549-95ca-9a4c6c439459\") " pod="openstack/glance-db-create-4ln57" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.622316 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9ac8fd8-1d1b-415e-963e-ad2242769cad-operator-scripts\") pod \"glance-e5f4-account-create-lz7hl\" (UID: \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\") " pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.622538 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn8wv\" (UniqueName: \"kubernetes.io/projected/29782cbf-c176-4549-95ca-9a4c6c439459-kube-api-access-xn8wv\") pod \"glance-db-create-4ln57\" (UID: \"29782cbf-c176-4549-95ca-9a4c6c439459\") " pod="openstack/glance-db-create-4ln57" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.625987 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9ac8fd8-1d1b-415e-963e-ad2242769cad-operator-scripts\") pod \"glance-e5f4-account-create-lz7hl\" (UID: \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\") " pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.640531 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p9jf\" (UniqueName: \"kubernetes.io/projected/c9ac8fd8-1d1b-415e-963e-ad2242769cad-kube-api-access-5p9jf\") pod \"glance-e5f4-account-create-lz7hl\" (UID: \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\") " pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.640560 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn8wv\" (UniqueName: \"kubernetes.io/projected/29782cbf-c176-4549-95ca-9a4c6c439459-kube-api-access-xn8wv\") pod \"glance-db-create-4ln57\" (UID: \"29782cbf-c176-4549-95ca-9a4c6c439459\") " pod="openstack/glance-db-create-4ln57" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.719094 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.790949 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4ln57" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.794195 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:06 crc kubenswrapper[4760]: I1125 08:27:06.928512 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.015452 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-7sfd5"] Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.161365 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" podUID="9756cb1d-9720-42d5-aa31-0a56e966c73f" containerName="dnsmasq-dns" containerID="cri-o://e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7" gracePeriod=10 Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.309514 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4ln57"] Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.317988 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e5f4-account-create-lz7hl"] Nov 25 08:27:07 crc kubenswrapper[4760]: W1125 08:27:07.322120 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29782cbf_c176_4549_95ca_9a4c6c439459.slice/crio-893902e3a188b0ee165c8e95eb0d25b2c0fbea0004b4ee1a78e1f7c807e2caf0 WatchSource:0}: Error finding container 893902e3a188b0ee165c8e95eb0d25b2c0fbea0004b4ee1a78e1f7c807e2caf0: Status 404 returned error can't find the container with id 893902e3a188b0ee165c8e95eb0d25b2c0fbea0004b4ee1a78e1f7c807e2caf0 Nov 25 08:27:07 crc kubenswrapper[4760]: W1125 08:27:07.322538 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9ac8fd8_1d1b_415e_963e_ad2242769cad.slice/crio-55ace1f07ab89196ab1b8d16867ebe4e7c002e2510f9ffe743b846a69f3900a0 WatchSource:0}: Error finding container 
55ace1f07ab89196ab1b8d16867ebe4e7c002e2510f9ffe743b846a69f3900a0: Status 404 returned error can't find the container with id 55ace1f07ab89196ab1b8d16867ebe4e7c002e2510f9ffe743b846a69f3900a0 Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.331967 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.588764 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.637318 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-config\") pod \"9756cb1d-9720-42d5-aa31-0a56e966c73f\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.637439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjj56\" (UniqueName: \"kubernetes.io/projected/9756cb1d-9720-42d5-aa31-0a56e966c73f-kube-api-access-tjj56\") pod \"9756cb1d-9720-42d5-aa31-0a56e966c73f\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.637524 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-dns-svc\") pod \"9756cb1d-9720-42d5-aa31-0a56e966c73f\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.637577 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-ovsdbserver-sb\") pod \"9756cb1d-9720-42d5-aa31-0a56e966c73f\" (UID: \"9756cb1d-9720-42d5-aa31-0a56e966c73f\") " Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 
08:27:07.648795 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9756cb1d-9720-42d5-aa31-0a56e966c73f-kube-api-access-tjj56" (OuterVolumeSpecName: "kube-api-access-tjj56") pod "9756cb1d-9720-42d5-aa31-0a56e966c73f" (UID: "9756cb1d-9720-42d5-aa31-0a56e966c73f"). InnerVolumeSpecName "kube-api-access-tjj56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.678521 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9756cb1d-9720-42d5-aa31-0a56e966c73f" (UID: "9756cb1d-9720-42d5-aa31-0a56e966c73f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.679462 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-config" (OuterVolumeSpecName: "config") pod "9756cb1d-9720-42d5-aa31-0a56e966c73f" (UID: "9756cb1d-9720-42d5-aa31-0a56e966c73f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.684137 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9756cb1d-9720-42d5-aa31-0a56e966c73f" (UID: "9756cb1d-9720-42d5-aa31-0a56e966c73f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.739720 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjj56\" (UniqueName: \"kubernetes.io/projected/9756cb1d-9720-42d5-aa31-0a56e966c73f-kube-api-access-tjj56\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.739766 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.739783 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:07 crc kubenswrapper[4760]: I1125 08:27:07.739794 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9756cb1d-9720-42d5-aa31-0a56e966c73f-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.169434 4760 generic.go:334] "Generic (PLEG): container finished" podID="29782cbf-c176-4549-95ca-9a4c6c439459" containerID="eab4a6959d031f2ed45f482e3d73d3251d2dd94880fe363dc6ee4b683e11032d" exitCode=0 Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.169545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4ln57" event={"ID":"29782cbf-c176-4549-95ca-9a4c6c439459","Type":"ContainerDied","Data":"eab4a6959d031f2ed45f482e3d73d3251d2dd94880fe363dc6ee4b683e11032d"} Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.169985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4ln57" event={"ID":"29782cbf-c176-4549-95ca-9a4c6c439459","Type":"ContainerStarted","Data":"893902e3a188b0ee165c8e95eb0d25b2c0fbea0004b4ee1a78e1f7c807e2caf0"} Nov 25 08:27:08 
crc kubenswrapper[4760]: I1125 08:27:08.172283 4760 generic.go:334] "Generic (PLEG): container finished" podID="c9ac8fd8-1d1b-415e-963e-ad2242769cad" containerID="d8be27c651b5994c2fb2ca53c6513db17d61105e2a40d6e59d21fd82a2d4592c" exitCode=0 Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.172329 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5f4-account-create-lz7hl" event={"ID":"c9ac8fd8-1d1b-415e-963e-ad2242769cad","Type":"ContainerDied","Data":"d8be27c651b5994c2fb2ca53c6513db17d61105e2a40d6e59d21fd82a2d4592c"} Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.172553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5f4-account-create-lz7hl" event={"ID":"c9ac8fd8-1d1b-415e-963e-ad2242769cad","Type":"ContainerStarted","Data":"55ace1f07ab89196ab1b8d16867ebe4e7c002e2510f9ffe743b846a69f3900a0"} Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.175036 4760 generic.go:334] "Generic (PLEG): container finished" podID="9756cb1d-9720-42d5-aa31-0a56e966c73f" containerID="e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7" exitCode=0 Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.175084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" event={"ID":"9756cb1d-9720-42d5-aa31-0a56e966c73f","Type":"ContainerDied","Data":"e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7"} Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.175106 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" event={"ID":"9756cb1d-9720-42d5-aa31-0a56e966c73f","Type":"ContainerDied","Data":"40e208670eec0a42c7dc3f950970b2f7a3ad9ea2c2dca98b7cc2e5662f557ec2"} Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.175123 4760 scope.go:117] "RemoveContainer" containerID="e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.175302 
4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c9b8d4f7-7sfd5" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.195651 4760 scope.go:117] "RemoveContainer" containerID="f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.221622 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-7sfd5"] Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.222235 4760 scope.go:117] "RemoveContainer" containerID="e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.223351 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-7sfd5"] Nov 25 08:27:08 crc kubenswrapper[4760]: E1125 08:27:08.223430 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7\": container with ID starting with e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7 not found: ID does not exist" containerID="e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.223458 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7"} err="failed to get container status \"e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7\": rpc error: code = NotFound desc = could not find container \"e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7\": container with ID starting with e67ea3aad2c9460acf80f38f36e237fbeb25a1be73d03319e05d7b3086b475b7 not found: ID does not exist" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.223479 4760 scope.go:117] "RemoveContainer" 
containerID="f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca" Nov 25 08:27:08 crc kubenswrapper[4760]: E1125 08:27:08.223699 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca\": container with ID starting with f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca not found: ID does not exist" containerID="f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.223732 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca"} err="failed to get container status \"f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca\": rpc error: code = NotFound desc = could not find container \"f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca\": container with ID starting with f94f400fcde9ad192cfb30eb32bf80d49dd54e4d566500caa8a94b1bdd6767ca not found: ID does not exist" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.848537 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.917502 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 25 08:27:08 crc kubenswrapper[4760]: I1125 08:27:08.953886 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9756cb1d-9720-42d5-aa31-0a56e966c73f" path="/var/lib/kubelet/pods/9756cb1d-9720-42d5-aa31-0a56e966c73f/volumes" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.619343 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4ln57" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.625461 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.690612 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9ac8fd8-1d1b-415e-963e-ad2242769cad-operator-scripts\") pod \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\" (UID: \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\") " Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.690670 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29782cbf-c176-4549-95ca-9a4c6c439459-operator-scripts\") pod \"29782cbf-c176-4549-95ca-9a4c6c439459\" (UID: \"29782cbf-c176-4549-95ca-9a4c6c439459\") " Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.690770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn8wv\" (UniqueName: \"kubernetes.io/projected/29782cbf-c176-4549-95ca-9a4c6c439459-kube-api-access-xn8wv\") pod \"29782cbf-c176-4549-95ca-9a4c6c439459\" (UID: \"29782cbf-c176-4549-95ca-9a4c6c439459\") " Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.690813 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p9jf\" (UniqueName: \"kubernetes.io/projected/c9ac8fd8-1d1b-415e-963e-ad2242769cad-kube-api-access-5p9jf\") pod \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\" (UID: \"c9ac8fd8-1d1b-415e-963e-ad2242769cad\") " Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.691198 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29782cbf-c176-4549-95ca-9a4c6c439459-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"29782cbf-c176-4549-95ca-9a4c6c439459" (UID: "29782cbf-c176-4549-95ca-9a4c6c439459"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.691692 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ac8fd8-1d1b-415e-963e-ad2242769cad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9ac8fd8-1d1b-415e-963e-ad2242769cad" (UID: "c9ac8fd8-1d1b-415e-963e-ad2242769cad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.695373 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29782cbf-c176-4549-95ca-9a4c6c439459-kube-api-access-xn8wv" (OuterVolumeSpecName: "kube-api-access-xn8wv") pod "29782cbf-c176-4549-95ca-9a4c6c439459" (UID: "29782cbf-c176-4549-95ca-9a4c6c439459"). InnerVolumeSpecName "kube-api-access-xn8wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.695469 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ac8fd8-1d1b-415e-963e-ad2242769cad-kube-api-access-5p9jf" (OuterVolumeSpecName: "kube-api-access-5p9jf") pod "c9ac8fd8-1d1b-415e-963e-ad2242769cad" (UID: "c9ac8fd8-1d1b-415e-963e-ad2242769cad"). InnerVolumeSpecName "kube-api-access-5p9jf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.792447 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p9jf\" (UniqueName: \"kubernetes.io/projected/c9ac8fd8-1d1b-415e-963e-ad2242769cad-kube-api-access-5p9jf\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.792503 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9ac8fd8-1d1b-415e-963e-ad2242769cad-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.792520 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29782cbf-c176-4549-95ca-9a4c6c439459-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:09 crc kubenswrapper[4760]: I1125 08:27:09.792530 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn8wv\" (UniqueName: \"kubernetes.io/projected/29782cbf-c176-4549-95ca-9a4c6c439459-kube-api-access-xn8wv\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.201762 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4ln57" event={"ID":"29782cbf-c176-4549-95ca-9a4c6c439459","Type":"ContainerDied","Data":"893902e3a188b0ee165c8e95eb0d25b2c0fbea0004b4ee1a78e1f7c807e2caf0"} Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.201795 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4ln57" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.201799 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="893902e3a188b0ee165c8e95eb0d25b2c0fbea0004b4ee1a78e1f7c807e2caf0" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.203373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5f4-account-create-lz7hl" event={"ID":"c9ac8fd8-1d1b-415e-963e-ad2242769cad","Type":"ContainerDied","Data":"55ace1f07ab89196ab1b8d16867ebe4e7c002e2510f9ffe743b846a69f3900a0"} Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.203468 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e5f4-account-create-lz7hl" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.206379 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ace1f07ab89196ab1b8d16867ebe4e7c002e2510f9ffe743b846a69f3900a0" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.810996 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-885wz"] Nov 25 08:27:10 crc kubenswrapper[4760]: E1125 08:27:10.811700 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9756cb1d-9720-42d5-aa31-0a56e966c73f" containerName="dnsmasq-dns" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.811723 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9756cb1d-9720-42d5-aa31-0a56e966c73f" containerName="dnsmasq-dns" Nov 25 08:27:10 crc kubenswrapper[4760]: E1125 08:27:10.811738 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ac8fd8-1d1b-415e-963e-ad2242769cad" containerName="mariadb-account-create" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.811746 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ac8fd8-1d1b-415e-963e-ad2242769cad" containerName="mariadb-account-create" Nov 25 08:27:10 crc 
kubenswrapper[4760]: E1125 08:27:10.811763 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29782cbf-c176-4549-95ca-9a4c6c439459" containerName="mariadb-database-create" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.811771 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="29782cbf-c176-4549-95ca-9a4c6c439459" containerName="mariadb-database-create" Nov 25 08:27:10 crc kubenswrapper[4760]: E1125 08:27:10.811809 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9756cb1d-9720-42d5-aa31-0a56e966c73f" containerName="init" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.811818 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9756cb1d-9720-42d5-aa31-0a56e966c73f" containerName="init" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.812010 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ac8fd8-1d1b-415e-963e-ad2242769cad" containerName="mariadb-account-create" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.812027 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9756cb1d-9720-42d5-aa31-0a56e966c73f" containerName="dnsmasq-dns" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.812040 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="29782cbf-c176-4549-95ca-9a4c6c439459" containerName="mariadb-database-create" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.812688 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-885wz" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.816857 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-885wz"] Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.913511 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fe7b-account-create-dwdsg"] Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.914740 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.917808 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.918416 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b034ea-33e2-47fa-beb9-5c05687bc805-operator-scripts\") pod \"keystone-db-create-885wz\" (UID: \"c3b034ea-33e2-47fa-beb9-5c05687bc805\") " pod="openstack/keystone-db-create-885wz" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.918472 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtc57\" (UniqueName: \"kubernetes.io/projected/c3b034ea-33e2-47fa-beb9-5c05687bc805-kube-api-access-mtc57\") pod \"keystone-db-create-885wz\" (UID: \"c3b034ea-33e2-47fa-beb9-5c05687bc805\") " pod="openstack/keystone-db-create-885wz" Nov 25 08:27:10 crc kubenswrapper[4760]: I1125 08:27:10.921990 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fe7b-account-create-dwdsg"] Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.020484 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b0361740-20d2-4735-9d93-a5d2fe88b1e1-operator-scripts\") pod \"keystone-fe7b-account-create-dwdsg\" (UID: \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\") " pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.020534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b034ea-33e2-47fa-beb9-5c05687bc805-operator-scripts\") pod \"keystone-db-create-885wz\" (UID: \"c3b034ea-33e2-47fa-beb9-5c05687bc805\") " pod="openstack/keystone-db-create-885wz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.020556 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db4hn\" (UniqueName: \"kubernetes.io/projected/b0361740-20d2-4735-9d93-a5d2fe88b1e1-kube-api-access-db4hn\") pod \"keystone-fe7b-account-create-dwdsg\" (UID: \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\") " pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.020591 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtc57\" (UniqueName: \"kubernetes.io/projected/c3b034ea-33e2-47fa-beb9-5c05687bc805-kube-api-access-mtc57\") pod \"keystone-db-create-885wz\" (UID: \"c3b034ea-33e2-47fa-beb9-5c05687bc805\") " pod="openstack/keystone-db-create-885wz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.021493 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b034ea-33e2-47fa-beb9-5c05687bc805-operator-scripts\") pod \"keystone-db-create-885wz\" (UID: \"c3b034ea-33e2-47fa-beb9-5c05687bc805\") " pod="openstack/keystone-db-create-885wz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.041448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtc57\" (UniqueName: 
\"kubernetes.io/projected/c3b034ea-33e2-47fa-beb9-5c05687bc805-kube-api-access-mtc57\") pod \"keystone-db-create-885wz\" (UID: \"c3b034ea-33e2-47fa-beb9-5c05687bc805\") " pod="openstack/keystone-db-create-885wz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.068078 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vbtwz"] Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.069016 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.084014 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vbtwz"] Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.122979 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0361740-20d2-4735-9d93-a5d2fe88b1e1-operator-scripts\") pod \"keystone-fe7b-account-create-dwdsg\" (UID: \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\") " pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.123035 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db4hn\" (UniqueName: \"kubernetes.io/projected/b0361740-20d2-4735-9d93-a5d2fe88b1e1-kube-api-access-db4hn\") pod \"keystone-fe7b-account-create-dwdsg\" (UID: \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\") " pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.124173 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0361740-20d2-4735-9d93-a5d2fe88b1e1-operator-scripts\") pod \"keystone-fe7b-account-create-dwdsg\" (UID: \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\") " pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.129070 4760 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-56e3-account-create-kn9lx"] Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.130332 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.134503 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.134599 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-885wz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.141568 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56e3-account-create-kn9lx"] Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.148198 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db4hn\" (UniqueName: \"kubernetes.io/projected/b0361740-20d2-4735-9d93-a5d2fe88b1e1-kube-api-access-db4hn\") pod \"keystone-fe7b-account-create-dwdsg\" (UID: \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\") " pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.224328 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0c95ebd-5709-47f6-b0cc-518622250437-operator-scripts\") pod \"placement-db-create-vbtwz\" (UID: \"a0c95ebd-5709-47f6-b0cc-518622250437\") " pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.224425 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx8pv\" (UniqueName: \"kubernetes.io/projected/a0c95ebd-5709-47f6-b0cc-518622250437-kube-api-access-lx8pv\") pod \"placement-db-create-vbtwz\" (UID: \"a0c95ebd-5709-47f6-b0cc-518622250437\") " 
pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.229640 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.325763 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx8pv\" (UniqueName: \"kubernetes.io/projected/a0c95ebd-5709-47f6-b0cc-518622250437-kube-api-access-lx8pv\") pod \"placement-db-create-vbtwz\" (UID: \"a0c95ebd-5709-47f6-b0cc-518622250437\") " pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.325827 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-operator-scripts\") pod \"placement-56e3-account-create-kn9lx\" (UID: \"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\") " pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.325870 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t866z\" (UniqueName: \"kubernetes.io/projected/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-kube-api-access-t866z\") pod \"placement-56e3-account-create-kn9lx\" (UID: \"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\") " pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.326059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0c95ebd-5709-47f6-b0cc-518622250437-operator-scripts\") pod \"placement-db-create-vbtwz\" (UID: \"a0c95ebd-5709-47f6-b0cc-518622250437\") " pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.326910 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0c95ebd-5709-47f6-b0cc-518622250437-operator-scripts\") pod \"placement-db-create-vbtwz\" (UID: \"a0c95ebd-5709-47f6-b0cc-518622250437\") " pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.346118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx8pv\" (UniqueName: \"kubernetes.io/projected/a0c95ebd-5709-47f6-b0cc-518622250437-kube-api-access-lx8pv\") pod \"placement-db-create-vbtwz\" (UID: \"a0c95ebd-5709-47f6-b0cc-518622250437\") " pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.390909 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.427496 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-operator-scripts\") pod \"placement-56e3-account-create-kn9lx\" (UID: \"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\") " pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.427575 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t866z\" (UniqueName: \"kubernetes.io/projected/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-kube-api-access-t866z\") pod \"placement-56e3-account-create-kn9lx\" (UID: \"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\") " pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.428356 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-operator-scripts\") pod \"placement-56e3-account-create-kn9lx\" (UID: 
\"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\") " pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.451316 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t866z\" (UniqueName: \"kubernetes.io/projected/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-kube-api-access-t866z\") pod \"placement-56e3-account-create-kn9lx\" (UID: \"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\") " pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.507985 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.574470 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-885wz"] Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.654992 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fe7b-account-create-dwdsg"] Nov 25 08:27:11 crc kubenswrapper[4760]: W1125 08:27:11.672111 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0361740_20d2_4735_9d93_a5d2fe88b1e1.slice/crio-7cd208c645562b8d1b6afb34459b961024f9f47c497d40a60c5c02bb9c260665 WatchSource:0}: Error finding container 7cd208c645562b8d1b6afb34459b961024f9f47c497d40a60c5c02bb9c260665: Status 404 returned error can't find the container with id 7cd208c645562b8d1b6afb34459b961024f9f47c497d40a60c5c02bb9c260665 Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.705704 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7slrm"] Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.719115 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.719662 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7slrm"] Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.721525 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ngxbf" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.721913 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.834133 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pf4k\" (UniqueName: \"kubernetes.io/projected/098e59d2-c893-4917-b18b-d0ba993a45c5-kube-api-access-5pf4k\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.834492 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-db-sync-config-data\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.834524 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-config-data\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.834642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-combined-ca-bundle\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.848774 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vbtwz"] Nov 25 08:27:11 crc kubenswrapper[4760]: W1125 08:27:11.851172 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0c95ebd_5709_47f6_b0cc_518622250437.slice/crio-c142052a36c7303a055527597f9a6c36d5908b281b0feeda001f95e692dc62f6 WatchSource:0}: Error finding container c142052a36c7303a055527597f9a6c36d5908b281b0feeda001f95e692dc62f6: Status 404 returned error can't find the container with id c142052a36c7303a055527597f9a6c36d5908b281b0feeda001f95e692dc62f6 Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.936552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-combined-ca-bundle\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.936665 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pf4k\" (UniqueName: \"kubernetes.io/projected/098e59d2-c893-4917-b18b-d0ba993a45c5-kube-api-access-5pf4k\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.936705 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-db-sync-config-data\") pod \"glance-db-sync-7slrm\" (UID: 
\"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.936725 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-config-data\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.942124 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-config-data\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.943004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-db-sync-config-data\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.943129 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-combined-ca-bundle\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.954704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pf4k\" (UniqueName: \"kubernetes.io/projected/098e59d2-c893-4917-b18b-d0ba993a45c5-kube-api-access-5pf4k\") pod \"glance-db-sync-7slrm\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:11 crc kubenswrapper[4760]: I1125 08:27:11.998097 4760 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56e3-account-create-kn9lx"] Nov 25 08:27:12 crc kubenswrapper[4760]: W1125 08:27:12.060514 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea1c9c96_0d40_4b97_9463_85050a4b7bc8.slice/crio-d92f53ad83192c80f7c410ca01860ae82d87ca054f7bf7b64ecb38ad5295884b WatchSource:0}: Error finding container d92f53ad83192c80f7c410ca01860ae82d87ca054f7bf7b64ecb38ad5295884b: Status 404 returned error can't find the container with id d92f53ad83192c80f7c410ca01860ae82d87ca054f7bf7b64ecb38ad5295884b Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.079611 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.220288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vbtwz" event={"ID":"a0c95ebd-5709-47f6-b0cc-518622250437","Type":"ContainerDied","Data":"cd265a4299e6f7eb5312d11ee89471e41eafd91f356d346e2b8bd9d2cec99ea1"} Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.220369 4760 generic.go:334] "Generic (PLEG): container finished" podID="a0c95ebd-5709-47f6-b0cc-518622250437" containerID="cd265a4299e6f7eb5312d11ee89471e41eafd91f356d346e2b8bd9d2cec99ea1" exitCode=0 Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.220558 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vbtwz" event={"ID":"a0c95ebd-5709-47f6-b0cc-518622250437","Type":"ContainerStarted","Data":"c142052a36c7303a055527597f9a6c36d5908b281b0feeda001f95e692dc62f6"} Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.222541 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56e3-account-create-kn9lx" 
event={"ID":"ea1c9c96-0d40-4b97-9463-85050a4b7bc8","Type":"ContainerStarted","Data":"d92f53ad83192c80f7c410ca01860ae82d87ca054f7bf7b64ecb38ad5295884b"} Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.224223 4760 generic.go:334] "Generic (PLEG): container finished" podID="b0361740-20d2-4735-9d93-a5d2fe88b1e1" containerID="cfe1cffae2b7612e2e384fee5b08a2b1be8bb1ae86211f44c9f3c3ec12f18af8" exitCode=0 Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.224281 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fe7b-account-create-dwdsg" event={"ID":"b0361740-20d2-4735-9d93-a5d2fe88b1e1","Type":"ContainerDied","Data":"cfe1cffae2b7612e2e384fee5b08a2b1be8bb1ae86211f44c9f3c3ec12f18af8"} Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.224297 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fe7b-account-create-dwdsg" event={"ID":"b0361740-20d2-4735-9d93-a5d2fe88b1e1","Type":"ContainerStarted","Data":"7cd208c645562b8d1b6afb34459b961024f9f47c497d40a60c5c02bb9c260665"} Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.225692 4760 generic.go:334] "Generic (PLEG): container finished" podID="c3b034ea-33e2-47fa-beb9-5c05687bc805" containerID="af1728b30f037c403bc5a54c88812af04bcbcc79d508fe6e072a4a73dcc810ea" exitCode=0 Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.225716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-885wz" event={"ID":"c3b034ea-33e2-47fa-beb9-5c05687bc805","Type":"ContainerDied","Data":"af1728b30f037c403bc5a54c88812af04bcbcc79d508fe6e072a4a73dcc810ea"} Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.225731 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-885wz" event={"ID":"c3b034ea-33e2-47fa-beb9-5c05687bc805","Type":"ContainerStarted","Data":"4afd12be84054985ea16f47d7cd94874b88e68ae7f1bc991ebaf038943a1b3b1"} Nov 25 08:27:12 crc kubenswrapper[4760]: W1125 08:27:12.978974 4760 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod098e59d2_c893_4917_b18b_d0ba993a45c5.slice/crio-cf4a51c7e6b4874bc5a1b63eb36927e8eda258537b2c7233a06736fded67efef WatchSource:0}: Error finding container cf4a51c7e6b4874bc5a1b63eb36927e8eda258537b2c7233a06736fded67efef: Status 404 returned error can't find the container with id cf4a51c7e6b4874bc5a1b63eb36927e8eda258537b2c7233a06736fded67efef Nov 25 08:27:12 crc kubenswrapper[4760]: I1125 08:27:12.984996 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7slrm"] Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.236938 4760 generic.go:334] "Generic (PLEG): container finished" podID="ea1c9c96-0d40-4b97-9463-85050a4b7bc8" containerID="f17ba7504c2fa48b7b56d9003e8c6e845b519fe9a5f05510b2c5eafd50289a7b" exitCode=0 Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.237019 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56e3-account-create-kn9lx" event={"ID":"ea1c9c96-0d40-4b97-9463-85050a4b7bc8","Type":"ContainerDied","Data":"f17ba7504c2fa48b7b56d9003e8c6e845b519fe9a5f05510b2c5eafd50289a7b"} Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.239676 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7slrm" event={"ID":"098e59d2-c893-4917-b18b-d0ba993a45c5","Type":"ContainerStarted","Data":"cf4a51c7e6b4874bc5a1b63eb36927e8eda258537b2c7233a06736fded67efef"} Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.605282 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.701919 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.737861 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-885wz" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.763142 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db4hn\" (UniqueName: \"kubernetes.io/projected/b0361740-20d2-4735-9d93-a5d2fe88b1e1-kube-api-access-db4hn\") pod \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\" (UID: \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\") " Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.764210 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0361740-20d2-4735-9d93-a5d2fe88b1e1-operator-scripts\") pod \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\" (UID: \"b0361740-20d2-4735-9d93-a5d2fe88b1e1\") " Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.764908 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0361740-20d2-4735-9d93-a5d2fe88b1e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0361740-20d2-4735-9d93-a5d2fe88b1e1" (UID: "b0361740-20d2-4735-9d93-a5d2fe88b1e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.768146 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0361740-20d2-4735-9d93-a5d2fe88b1e1-kube-api-access-db4hn" (OuterVolumeSpecName: "kube-api-access-db4hn") pod "b0361740-20d2-4735-9d93-a5d2fe88b1e1" (UID: "b0361740-20d2-4735-9d93-a5d2fe88b1e1"). InnerVolumeSpecName "kube-api-access-db4hn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.865975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0c95ebd-5709-47f6-b0cc-518622250437-operator-scripts\") pod \"a0c95ebd-5709-47f6-b0cc-518622250437\" (UID: \"a0c95ebd-5709-47f6-b0cc-518622250437\") " Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.866019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b034ea-33e2-47fa-beb9-5c05687bc805-operator-scripts\") pod \"c3b034ea-33e2-47fa-beb9-5c05687bc805\" (UID: \"c3b034ea-33e2-47fa-beb9-5c05687bc805\") " Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.866047 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtc57\" (UniqueName: \"kubernetes.io/projected/c3b034ea-33e2-47fa-beb9-5c05687bc805-kube-api-access-mtc57\") pod \"c3b034ea-33e2-47fa-beb9-5c05687bc805\" (UID: \"c3b034ea-33e2-47fa-beb9-5c05687bc805\") " Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.866109 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx8pv\" (UniqueName: \"kubernetes.io/projected/a0c95ebd-5709-47f6-b0cc-518622250437-kube-api-access-lx8pv\") pod \"a0c95ebd-5709-47f6-b0cc-518622250437\" (UID: \"a0c95ebd-5709-47f6-b0cc-518622250437\") " Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.866500 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db4hn\" (UniqueName: \"kubernetes.io/projected/b0361740-20d2-4735-9d93-a5d2fe88b1e1-kube-api-access-db4hn\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.866522 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b0361740-20d2-4735-9d93-a5d2fe88b1e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.866643 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0c95ebd-5709-47f6-b0cc-518622250437-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0c95ebd-5709-47f6-b0cc-518622250437" (UID: "a0c95ebd-5709-47f6-b0cc-518622250437"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.867013 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b034ea-33e2-47fa-beb9-5c05687bc805-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3b034ea-33e2-47fa-beb9-5c05687bc805" (UID: "c3b034ea-33e2-47fa-beb9-5c05687bc805"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.870219 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b034ea-33e2-47fa-beb9-5c05687bc805-kube-api-access-mtc57" (OuterVolumeSpecName: "kube-api-access-mtc57") pod "c3b034ea-33e2-47fa-beb9-5c05687bc805" (UID: "c3b034ea-33e2-47fa-beb9-5c05687bc805"). InnerVolumeSpecName "kube-api-access-mtc57". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.870274 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c95ebd-5709-47f6-b0cc-518622250437-kube-api-access-lx8pv" (OuterVolumeSpecName: "kube-api-access-lx8pv") pod "a0c95ebd-5709-47f6-b0cc-518622250437" (UID: "a0c95ebd-5709-47f6-b0cc-518622250437"). InnerVolumeSpecName "kube-api-access-lx8pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.967500 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx8pv\" (UniqueName: \"kubernetes.io/projected/a0c95ebd-5709-47f6-b0cc-518622250437-kube-api-access-lx8pv\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.967550 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0c95ebd-5709-47f6-b0cc-518622250437-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.967560 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b034ea-33e2-47fa-beb9-5c05687bc805-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:13 crc kubenswrapper[4760]: I1125 08:27:13.967569 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtc57\" (UniqueName: \"kubernetes.io/projected/c3b034ea-33e2-47fa-beb9-5c05687bc805-kube-api-access-mtc57\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.248911 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fe7b-account-create-dwdsg" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.248929 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fe7b-account-create-dwdsg" event={"ID":"b0361740-20d2-4735-9d93-a5d2fe88b1e1","Type":"ContainerDied","Data":"7cd208c645562b8d1b6afb34459b961024f9f47c497d40a60c5c02bb9c260665"} Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.248984 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cd208c645562b8d1b6afb34459b961024f9f47c497d40a60c5c02bb9c260665" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.250690 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-885wz" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.251418 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-885wz" event={"ID":"c3b034ea-33e2-47fa-beb9-5c05687bc805","Type":"ContainerDied","Data":"4afd12be84054985ea16f47d7cd94874b88e68ae7f1bc991ebaf038943a1b3b1"} Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.251456 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4afd12be84054985ea16f47d7cd94874b88e68ae7f1bc991ebaf038943a1b3b1" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.253008 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vbtwz" event={"ID":"a0c95ebd-5709-47f6-b0cc-518622250437","Type":"ContainerDied","Data":"c142052a36c7303a055527597f9a6c36d5908b281b0feeda001f95e692dc62f6"} Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.253113 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c142052a36c7303a055527597f9a6c36d5908b281b0feeda001f95e692dc62f6" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.253037 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vbtwz" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.512581 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.677833 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t866z\" (UniqueName: \"kubernetes.io/projected/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-kube-api-access-t866z\") pod \"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\" (UID: \"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\") " Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.678048 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-operator-scripts\") pod \"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\" (UID: \"ea1c9c96-0d40-4b97-9463-85050a4b7bc8\") " Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.678720 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea1c9c96-0d40-4b97-9463-85050a4b7bc8" (UID: "ea1c9c96-0d40-4b97-9463-85050a4b7bc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.706051 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-kube-api-access-t866z" (OuterVolumeSpecName: "kube-api-access-t866z") pod "ea1c9c96-0d40-4b97-9463-85050a4b7bc8" (UID: "ea1c9c96-0d40-4b97-9463-85050a4b7bc8"). InnerVolumeSpecName "kube-api-access-t866z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.780512 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t866z\" (UniqueName: \"kubernetes.io/projected/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-kube-api-access-t866z\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:14 crc kubenswrapper[4760]: I1125 08:27:14.780560 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea1c9c96-0d40-4b97-9463-85050a4b7bc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:15 crc kubenswrapper[4760]: I1125 08:27:15.263637 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56e3-account-create-kn9lx" event={"ID":"ea1c9c96-0d40-4b97-9463-85050a4b7bc8","Type":"ContainerDied","Data":"d92f53ad83192c80f7c410ca01860ae82d87ca054f7bf7b64ecb38ad5295884b"} Nov 25 08:27:15 crc kubenswrapper[4760]: I1125 08:27:15.263687 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92f53ad83192c80f7c410ca01860ae82d87ca054f7bf7b64ecb38ad5295884b" Nov 25 08:27:15 crc kubenswrapper[4760]: I1125 08:27:15.263692 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-56e3-account-create-kn9lx" Nov 25 08:27:16 crc kubenswrapper[4760]: I1125 08:27:16.862143 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.666163 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wtp5g" podUID="7b050dee-2005-4a2b-8550-6f5d055a86b6" containerName="ovn-controller" probeResult="failure" output=< Nov 25 08:27:21 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 08:27:21 crc kubenswrapper[4760]: > Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.730385 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.731759 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kf25c" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.966554 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wtp5g-config-t9qt7"] Nov 25 08:27:21 crc kubenswrapper[4760]: E1125 08:27:21.966953 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c95ebd-5709-47f6-b0cc-518622250437" containerName="mariadb-database-create" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.966970 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c95ebd-5709-47f6-b0cc-518622250437" containerName="mariadb-database-create" Nov 25 08:27:21 crc kubenswrapper[4760]: E1125 08:27:21.966988 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b034ea-33e2-47fa-beb9-5c05687bc805" containerName="mariadb-database-create" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.966996 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b034ea-33e2-47fa-beb9-5c05687bc805" 
containerName="mariadb-database-create" Nov 25 08:27:21 crc kubenswrapper[4760]: E1125 08:27:21.967020 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0361740-20d2-4735-9d93-a5d2fe88b1e1" containerName="mariadb-account-create" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.967031 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0361740-20d2-4735-9d93-a5d2fe88b1e1" containerName="mariadb-account-create" Nov 25 08:27:21 crc kubenswrapper[4760]: E1125 08:27:21.967049 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1c9c96-0d40-4b97-9463-85050a4b7bc8" containerName="mariadb-account-create" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.967057 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1c9c96-0d40-4b97-9463-85050a4b7bc8" containerName="mariadb-account-create" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.967263 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0361740-20d2-4735-9d93-a5d2fe88b1e1" containerName="mariadb-account-create" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.967279 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1c9c96-0d40-4b97-9463-85050a4b7bc8" containerName="mariadb-account-create" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.967296 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c95ebd-5709-47f6-b0cc-518622250437" containerName="mariadb-database-create" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.967313 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b034ea-33e2-47fa-beb9-5c05687bc805" containerName="mariadb-database-create" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.967989 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.971049 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 08:27:21 crc kubenswrapper[4760]: I1125 08:27:21.973326 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wtp5g-config-t9qt7"] Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.104126 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-log-ovn\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.104189 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-scripts\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.104295 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run-ovn\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.104331 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-additional-scripts\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: 
\"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.104364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.104387 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcxv8\" (UniqueName: \"kubernetes.io/projected/92fb73a6-693c-4e5e-964c-a68ebb6119e3-kube-api-access-mcxv8\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.205304 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-scripts\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.205384 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run-ovn\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.205410 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-additional-scripts\") pod 
\"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.205436 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.205454 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcxv8\" (UniqueName: \"kubernetes.io/projected/92fb73a6-693c-4e5e-964c-a68ebb6119e3-kube-api-access-mcxv8\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.205532 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-log-ovn\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.205781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-log-ovn\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.205795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run-ovn\") pod \"ovn-controller-wtp5g-config-t9qt7\" 
(UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.205827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.206545 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-additional-scripts\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.207412 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-scripts\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.224228 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcxv8\" (UniqueName: \"kubernetes.io/projected/92fb73a6-693c-4e5e-964c-a68ebb6119e3-kube-api-access-mcxv8\") pod \"ovn-controller-wtp5g-config-t9qt7\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.296177 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.323680 4760 generic.go:334] "Generic (PLEG): container finished" podID="a1de21d0-f4de-4294-a1b0-ec1328f46531" containerID="e0f65cbf20b69fcac39954194d3b9cfcddfcddfc66fab1a7b56132d9e8e38deb" exitCode=0 Nov 25 08:27:22 crc kubenswrapper[4760]: I1125 08:27:22.324790 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1de21d0-f4de-4294-a1b0-ec1328f46531","Type":"ContainerDied","Data":"e0f65cbf20b69fcac39954194d3b9cfcddfcddfc66fab1a7b56132d9e8e38deb"} Nov 25 08:27:23 crc kubenswrapper[4760]: I1125 08:27:23.335229 4760 generic.go:334] "Generic (PLEG): container finished" podID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" containerID="c9b801485c25de17cda2dabe57e1991d03968731843b911e0241cbab2acadee2" exitCode=0 Nov 25 08:27:23 crc kubenswrapper[4760]: I1125 08:27:23.335313 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d","Type":"ContainerDied","Data":"c9b801485c25de17cda2dabe57e1991d03968731843b911e0241cbab2acadee2"} Nov 25 08:27:23 crc kubenswrapper[4760]: I1125 08:27:23.344080 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1de21d0-f4de-4294-a1b0-ec1328f46531","Type":"ContainerStarted","Data":"cf2fa34095cd9cb121b2ff90fc68810c7964cd3310d3a4b05a29a8049971b547"} Nov 25 08:27:23 crc kubenswrapper[4760]: I1125 08:27:23.344754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:27:23 crc kubenswrapper[4760]: I1125 08:27:23.383449 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.434723373 podStartE2EDuration="57.383431305s" podCreationTimestamp="2025-11-25 08:26:26 +0000 UTC" 
firstStartedPulling="2025-11-25 08:26:40.426814648 +0000 UTC m=+934.135845443" lastFinishedPulling="2025-11-25 08:26:48.37552258 +0000 UTC m=+942.084553375" observedRunningTime="2025-11-25 08:27:23.381467311 +0000 UTC m=+977.090498116" watchObservedRunningTime="2025-11-25 08:27:23.383431305 +0000 UTC m=+977.092462100" Nov 25 08:27:23 crc kubenswrapper[4760]: I1125 08:27:23.427554 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wtp5g-config-t9qt7"] Nov 25 08:27:23 crc kubenswrapper[4760]: W1125 08:27:23.436811 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92fb73a6_693c_4e5e_964c_a68ebb6119e3.slice/crio-1f382f56e2805d49e5590df0487f8040a8b482fab73fbffbd3769777ffd35468 WatchSource:0}: Error finding container 1f382f56e2805d49e5590df0487f8040a8b482fab73fbffbd3769777ffd35468: Status 404 returned error can't find the container with id 1f382f56e2805d49e5590df0487f8040a8b482fab73fbffbd3769777ffd35468 Nov 25 08:27:24 crc kubenswrapper[4760]: I1125 08:27:24.352327 4760 generic.go:334] "Generic (PLEG): container finished" podID="92fb73a6-693c-4e5e-964c-a68ebb6119e3" containerID="8b0f133493dddbd699c049cd7e3e2409af4216828301b50557f8d1c7dfacc1dc" exitCode=0 Nov 25 08:27:24 crc kubenswrapper[4760]: I1125 08:27:24.352381 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wtp5g-config-t9qt7" event={"ID":"92fb73a6-693c-4e5e-964c-a68ebb6119e3","Type":"ContainerDied","Data":"8b0f133493dddbd699c049cd7e3e2409af4216828301b50557f8d1c7dfacc1dc"} Nov 25 08:27:24 crc kubenswrapper[4760]: I1125 08:27:24.352681 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wtp5g-config-t9qt7" event={"ID":"92fb73a6-693c-4e5e-964c-a68ebb6119e3","Type":"ContainerStarted","Data":"1f382f56e2805d49e5590df0487f8040a8b482fab73fbffbd3769777ffd35468"} Nov 25 08:27:24 crc kubenswrapper[4760]: I1125 08:27:24.355619 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7slrm" event={"ID":"098e59d2-c893-4917-b18b-d0ba993a45c5","Type":"ContainerStarted","Data":"061b9ba1c22a8cbe27b653389547ce134aa0ea069badec34203a07e49eb9f48e"} Nov 25 08:27:24 crc kubenswrapper[4760]: I1125 08:27:24.360998 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d","Type":"ContainerStarted","Data":"a5b5032f75202681ff15f1849a5603fa93e68299a1d6ea58a8f9e77727a67d66"} Nov 25 08:27:24 crc kubenswrapper[4760]: I1125 08:27:24.361579 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 08:27:24 crc kubenswrapper[4760]: I1125 08:27:24.408915 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7slrm" podStartSLOduration=3.358892242 podStartE2EDuration="13.408897779s" podCreationTimestamp="2025-11-25 08:27:11 +0000 UTC" firstStartedPulling="2025-11-25 08:27:12.981449108 +0000 UTC m=+966.690479903" lastFinishedPulling="2025-11-25 08:27:23.031454645 +0000 UTC m=+976.740485440" observedRunningTime="2025-11-25 08:27:24.394967482 +0000 UTC m=+978.103998277" watchObservedRunningTime="2025-11-25 08:27:24.408897779 +0000 UTC m=+978.117928574" Nov 25 08:27:24 crc kubenswrapper[4760]: I1125 08:27:24.429077 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.417539348 podStartE2EDuration="58.429059799s" podCreationTimestamp="2025-11-25 08:26:26 +0000 UTC" firstStartedPulling="2025-11-25 08:26:40.214613966 +0000 UTC m=+933.923644761" lastFinishedPulling="2025-11-25 08:26:48.226134417 +0000 UTC m=+941.935165212" observedRunningTime="2025-11-25 08:27:24.417088836 +0000 UTC m=+978.126119631" watchObservedRunningTime="2025-11-25 08:27:24.429059799 +0000 UTC m=+978.138090594" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.692026 
4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.767423 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-log-ovn\") pod \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.767505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run\") pod \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.767572 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-additional-scripts\") pod \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.767605 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcxv8\" (UniqueName: \"kubernetes.io/projected/92fb73a6-693c-4e5e-964c-a68ebb6119e3-kube-api-access-mcxv8\") pod \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.767653 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-scripts\") pod \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.767671 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run-ovn\") pod \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\" (UID: \"92fb73a6-693c-4e5e-964c-a68ebb6119e3\") " Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.768001 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "92fb73a6-693c-4e5e-964c-a68ebb6119e3" (UID: "92fb73a6-693c-4e5e-964c-a68ebb6119e3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.768034 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "92fb73a6-693c-4e5e-964c-a68ebb6119e3" (UID: "92fb73a6-693c-4e5e-964c-a68ebb6119e3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.768056 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run" (OuterVolumeSpecName: "var-run") pod "92fb73a6-693c-4e5e-964c-a68ebb6119e3" (UID: "92fb73a6-693c-4e5e-964c-a68ebb6119e3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.769291 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "92fb73a6-693c-4e5e-964c-a68ebb6119e3" (UID: "92fb73a6-693c-4e5e-964c-a68ebb6119e3"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.769430 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-scripts" (OuterVolumeSpecName: "scripts") pod "92fb73a6-693c-4e5e-964c-a68ebb6119e3" (UID: "92fb73a6-693c-4e5e-964c-a68ebb6119e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.774909 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92fb73a6-693c-4e5e-964c-a68ebb6119e3-kube-api-access-mcxv8" (OuterVolumeSpecName: "kube-api-access-mcxv8") pod "92fb73a6-693c-4e5e-964c-a68ebb6119e3" (UID: "92fb73a6-693c-4e5e-964c-a68ebb6119e3"). InnerVolumeSpecName "kube-api-access-mcxv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.869086 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.869120 4760 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.869129 4760 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.869140 4760 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92fb73a6-693c-4e5e-964c-a68ebb6119e3-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:25 crc 
kubenswrapper[4760]: I1125 08:27:25.869148 4760 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92fb73a6-693c-4e5e-964c-a68ebb6119e3-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:25 crc kubenswrapper[4760]: I1125 08:27:25.869159 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcxv8\" (UniqueName: \"kubernetes.io/projected/92fb73a6-693c-4e5e-964c-a68ebb6119e3-kube-api-access-mcxv8\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.379298 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wtp5g-config-t9qt7" event={"ID":"92fb73a6-693c-4e5e-964c-a68ebb6119e3","Type":"ContainerDied","Data":"1f382f56e2805d49e5590df0487f8040a8b482fab73fbffbd3769777ffd35468"} Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.379350 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f382f56e2805d49e5590df0487f8040a8b482fab73fbffbd3769777ffd35468" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.379391 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wtp5g-config-t9qt7" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.667797 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-wtp5g" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.802265 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wtp5g-config-t9qt7"] Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.814071 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wtp5g-config-t9qt7"] Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.867735 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wtp5g-config-ct2w7"] Nov 25 08:27:26 crc kubenswrapper[4760]: E1125 08:27:26.868163 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92fb73a6-693c-4e5e-964c-a68ebb6119e3" containerName="ovn-config" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.868183 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="92fb73a6-693c-4e5e-964c-a68ebb6119e3" containerName="ovn-config" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.868429 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="92fb73a6-693c-4e5e-964c-a68ebb6119e3" containerName="ovn-config" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.869119 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.871820 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.883221 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wtp5g-config-ct2w7"] Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.948009 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92fb73a6-693c-4e5e-964c-a68ebb6119e3" path="/var/lib/kubelet/pods/92fb73a6-693c-4e5e-964c-a68ebb6119e3/volumes" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.986521 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-scripts\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.986663 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-additional-scripts\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.986797 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run-ovn\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.986866 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t765\" (UniqueName: \"kubernetes.io/projected/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-kube-api-access-4t765\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.986892 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:26 crc kubenswrapper[4760]: I1125 08:27:26.986935 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-log-ovn\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.089116 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-additional-scripts\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.089591 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run-ovn\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc 
kubenswrapper[4760]: I1125 08:27:27.089700 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t765\" (UniqueName: \"kubernetes.io/projected/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-kube-api-access-4t765\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.089867 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run-ovn\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.090001 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.089732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.090160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-log-ovn\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.090240 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-additional-scripts\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.090276 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-log-ovn\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.090573 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-scripts\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.092923 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-scripts\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.138126 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t765\" (UniqueName: \"kubernetes.io/projected/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-kube-api-access-4t765\") pod \"ovn-controller-wtp5g-config-ct2w7\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.188710 4760 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:27 crc kubenswrapper[4760]: I1125 08:27:27.752296 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wtp5g-config-ct2w7"] Nov 25 08:27:27 crc kubenswrapper[4760]: W1125 08:27:27.757467 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f7e0dd2_aaa4_4991_a16a_824a4c9fcf8b.slice/crio-36e64437509ef94c17c8fc1cf6bfddeb42313d26d4aa1f5c58f5bcacc4313e13 WatchSource:0}: Error finding container 36e64437509ef94c17c8fc1cf6bfddeb42313d26d4aa1f5c58f5bcacc4313e13: Status 404 returned error can't find the container with id 36e64437509ef94c17c8fc1cf6bfddeb42313d26d4aa1f5c58f5bcacc4313e13 Nov 25 08:27:28 crc kubenswrapper[4760]: I1125 08:27:28.405870 4760 generic.go:334] "Generic (PLEG): container finished" podID="0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" containerID="e344a34eb86dae415ae484b86556afe426e052c7e44dbcac25a9241f90d819ba" exitCode=0 Nov 25 08:27:28 crc kubenswrapper[4760]: I1125 08:27:28.405914 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wtp5g-config-ct2w7" event={"ID":"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b","Type":"ContainerDied","Data":"e344a34eb86dae415ae484b86556afe426e052c7e44dbcac25a9241f90d819ba"} Nov 25 08:27:28 crc kubenswrapper[4760]: I1125 08:27:28.406218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wtp5g-config-ct2w7" event={"ID":"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b","Type":"ContainerStarted","Data":"36e64437509ef94c17c8fc1cf6bfddeb42313d26d4aa1f5c58f5bcacc4313e13"} Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.814691 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.933477 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run\") pod \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.933589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run-ovn\") pod \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.933586 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run" (OuterVolumeSpecName: "var-run") pod "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" (UID: "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.933625 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t765\" (UniqueName: \"kubernetes.io/projected/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-kube-api-access-4t765\") pod \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.933607 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" (UID: "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.933714 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-additional-scripts\") pod \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.933759 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-log-ovn\") pod \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.933850 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-scripts\") pod \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\" (UID: \"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b\") " Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.933891 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" (UID: "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.934315 4760 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.934337 4760 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.934346 4760 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.935091 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-scripts" (OuterVolumeSpecName: "scripts") pod "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" (UID: "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.935506 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" (UID: "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:29 crc kubenswrapper[4760]: I1125 08:27:29.938781 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-kube-api-access-4t765" (OuterVolumeSpecName: "kube-api-access-4t765") pod "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" (UID: "0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b"). InnerVolumeSpecName "kube-api-access-4t765". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.036494 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t765\" (UniqueName: \"kubernetes.io/projected/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-kube-api-access-4t765\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.036525 4760 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.036535 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.420717 4760 generic.go:334] "Generic (PLEG): container finished" podID="098e59d2-c893-4917-b18b-d0ba993a45c5" containerID="061b9ba1c22a8cbe27b653389547ce134aa0ea069badec34203a07e49eb9f48e" exitCode=0 Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.420796 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7slrm" event={"ID":"098e59d2-c893-4917-b18b-d0ba993a45c5","Type":"ContainerDied","Data":"061b9ba1c22a8cbe27b653389547ce134aa0ea069badec34203a07e49eb9f48e"} Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.422936 4760 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/ovn-controller-wtp5g-config-ct2w7" Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.422898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wtp5g-config-ct2w7" event={"ID":"0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b","Type":"ContainerDied","Data":"36e64437509ef94c17c8fc1cf6bfddeb42313d26d4aa1f5c58f5bcacc4313e13"} Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.423097 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e64437509ef94c17c8fc1cf6bfddeb42313d26d4aa1f5c58f5bcacc4313e13" Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.891814 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wtp5g-config-ct2w7"] Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.917089 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wtp5g-config-ct2w7"] Nov 25 08:27:30 crc kubenswrapper[4760]: I1125 08:27:30.946521 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" path="/var/lib/kubelet/pods/0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b/volumes" Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.746303 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.746651 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:27:31 crc kubenswrapper[4760]: 
I1125 08:27:31.777814 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.864760 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pf4k\" (UniqueName: \"kubernetes.io/projected/098e59d2-c893-4917-b18b-d0ba993a45c5-kube-api-access-5pf4k\") pod \"098e59d2-c893-4917-b18b-d0ba993a45c5\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.864822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-combined-ca-bundle\") pod \"098e59d2-c893-4917-b18b-d0ba993a45c5\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.864881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-config-data\") pod \"098e59d2-c893-4917-b18b-d0ba993a45c5\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.864968 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-db-sync-config-data\") pod \"098e59d2-c893-4917-b18b-d0ba993a45c5\" (UID: \"098e59d2-c893-4917-b18b-d0ba993a45c5\") " Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.869534 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098e59d2-c893-4917-b18b-d0ba993a45c5-kube-api-access-5pf4k" (OuterVolumeSpecName: "kube-api-access-5pf4k") pod "098e59d2-c893-4917-b18b-d0ba993a45c5" (UID: "098e59d2-c893-4917-b18b-d0ba993a45c5"). InnerVolumeSpecName "kube-api-access-5pf4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.872461 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "098e59d2-c893-4917-b18b-d0ba993a45c5" (UID: "098e59d2-c893-4917-b18b-d0ba993a45c5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.887539 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "098e59d2-c893-4917-b18b-d0ba993a45c5" (UID: "098e59d2-c893-4917-b18b-d0ba993a45c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.903588 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-config-data" (OuterVolumeSpecName: "config-data") pod "098e59d2-c893-4917-b18b-d0ba993a45c5" (UID: "098e59d2-c893-4917-b18b-d0ba993a45c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.967866 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.967930 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.967946 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pf4k\" (UniqueName: \"kubernetes.io/projected/098e59d2-c893-4917-b18b-d0ba993a45c5-kube-api-access-5pf4k\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:31 crc kubenswrapper[4760]: I1125 08:27:31.967957 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098e59d2-c893-4917-b18b-d0ba993a45c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.437484 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7slrm" event={"ID":"098e59d2-c893-4917-b18b-d0ba993a45c5","Type":"ContainerDied","Data":"cf4a51c7e6b4874bc5a1b63eb36927e8eda258537b2c7233a06736fded67efef"} Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.437705 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4a51c7e6b4874bc5a1b63eb36927e8eda258537b2c7233a06736fded67efef" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.437552 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7slrm" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.839555 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-569d458467-g8shq"] Nov 25 08:27:32 crc kubenswrapper[4760]: E1125 08:27:32.839880 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098e59d2-c893-4917-b18b-d0ba993a45c5" containerName="glance-db-sync" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.839895 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="098e59d2-c893-4917-b18b-d0ba993a45c5" containerName="glance-db-sync" Nov 25 08:27:32 crc kubenswrapper[4760]: E1125 08:27:32.839906 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" containerName="ovn-config" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.839913 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" containerName="ovn-config" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.840068 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f7e0dd2-aaa4-4991-a16a-824a4c9fcf8b" containerName="ovn-config" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.840083 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="098e59d2-c893-4917-b18b-d0ba993a45c5" containerName="glance-db-sync" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.840906 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.875487 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-569d458467-g8shq"] Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.885726 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmvrc\" (UniqueName: \"kubernetes.io/projected/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-kube-api-access-hmvrc\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.885901 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-sb\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.886026 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-nb\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.886086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-config\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.886166 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-dns-svc\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.987905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-nb\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.987982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-config\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.988040 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-dns-svc\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.988119 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmvrc\" (UniqueName: \"kubernetes.io/projected/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-kube-api-access-hmvrc\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.988259 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-sb\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.989096 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-dns-svc\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.989133 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-nb\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.989226 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-config\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:32 crc kubenswrapper[4760]: I1125 08:27:32.989374 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-sb\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:33 crc kubenswrapper[4760]: I1125 08:27:33.008151 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmvrc\" (UniqueName: 
\"kubernetes.io/projected/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-kube-api-access-hmvrc\") pod \"dnsmasq-dns-569d458467-g8shq\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:33 crc kubenswrapper[4760]: I1125 08:27:33.160904 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:33 crc kubenswrapper[4760]: I1125 08:27:33.588129 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-569d458467-g8shq"] Nov 25 08:27:34 crc kubenswrapper[4760]: I1125 08:27:34.452532 4760 generic.go:334] "Generic (PLEG): container finished" podID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerID="a9611e525499a3ad3bc15bcae667b51434fcc10e70e1e7b825b3cb7e11e9b3cf" exitCode=0 Nov 25 08:27:34 crc kubenswrapper[4760]: I1125 08:27:34.452582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-569d458467-g8shq" event={"ID":"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483","Type":"ContainerDied","Data":"a9611e525499a3ad3bc15bcae667b51434fcc10e70e1e7b825b3cb7e11e9b3cf"} Nov 25 08:27:34 crc kubenswrapper[4760]: I1125 08:27:34.453047 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-569d458467-g8shq" event={"ID":"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483","Type":"ContainerStarted","Data":"5eff68f5408e0c2f909387a6e786bc3a60a89cb2df1d00c83befb4570d335073"} Nov 25 08:27:35 crc kubenswrapper[4760]: I1125 08:27:35.463544 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-569d458467-g8shq" event={"ID":"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483","Type":"ContainerStarted","Data":"c9f5d9d79d2bb3441060848f7fd44891b54ea159d1672c2b86a769e1629f6a65"} Nov 25 08:27:35 crc kubenswrapper[4760]: I1125 08:27:35.464929 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:35 crc kubenswrapper[4760]: I1125 
08:27:35.483787 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-569d458467-g8shq" podStartSLOduration=3.483769059 podStartE2EDuration="3.483769059s" podCreationTimestamp="2025-11-25 08:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:27:35.482295668 +0000 UTC m=+989.191326473" watchObservedRunningTime="2025-11-25 08:27:35.483769059 +0000 UTC m=+989.192799854" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.055928 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.325507 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xwpz5"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.326552 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.334367 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.347281 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xwpz5"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.377208 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2kzr\" (UniqueName: \"kubernetes.io/projected/5084c140-9bd7-4bbf-be7c-37270ee768f8-kube-api-access-g2kzr\") pod \"cinder-db-create-xwpz5\" (UID: \"5084c140-9bd7-4bbf-be7c-37270ee768f8\") " pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.377308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5084c140-9bd7-4bbf-be7c-37270ee768f8-operator-scripts\") pod \"cinder-db-create-xwpz5\" (UID: \"5084c140-9bd7-4bbf-be7c-37270ee768f8\") " pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.454222 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jqz7h"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.455183 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.468387 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-49bc-account-create-9f8xp"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.493972 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2kzr\" (UniqueName: \"kubernetes.io/projected/5084c140-9bd7-4bbf-be7c-37270ee768f8-kube-api-access-g2kzr\") pod \"cinder-db-create-xwpz5\" (UID: \"5084c140-9bd7-4bbf-be7c-37270ee768f8\") " pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.494325 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6c56\" (UniqueName: \"kubernetes.io/projected/2097afb5-f032-45c6-a7d4-52b45731db00-kube-api-access-d6c56\") pod \"barbican-db-create-jqz7h\" (UID: \"2097afb5-f032-45c6-a7d4-52b45731db00\") " pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.494481 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5084c140-9bd7-4bbf-be7c-37270ee768f8-operator-scripts\") pod \"cinder-db-create-xwpz5\" (UID: \"5084c140-9bd7-4bbf-be7c-37270ee768f8\") " pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.494656 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2097afb5-f032-45c6-a7d4-52b45731db00-operator-scripts\") pod \"barbican-db-create-jqz7h\" (UID: \"2097afb5-f032-45c6-a7d4-52b45731db00\") " pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.497056 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5084c140-9bd7-4bbf-be7c-37270ee768f8-operator-scripts\") pod \"cinder-db-create-xwpz5\" (UID: \"5084c140-9bd7-4bbf-be7c-37270ee768f8\") " pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.504398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqz7h"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.527238 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.548032 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-49bc-account-create-9f8xp"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.548575 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.557145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2kzr\" (UniqueName: \"kubernetes.io/projected/5084c140-9bd7-4bbf-be7c-37270ee768f8-kube-api-access-g2kzr\") pod \"cinder-db-create-xwpz5\" (UID: \"5084c140-9bd7-4bbf-be7c-37270ee768f8\") " pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.609170 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6c56\" (UniqueName: 
\"kubernetes.io/projected/2097afb5-f032-45c6-a7d4-52b45731db00-kube-api-access-d6c56\") pod \"barbican-db-create-jqz7h\" (UID: \"2097afb5-f032-45c6-a7d4-52b45731db00\") " pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.609297 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2097afb5-f032-45c6-a7d4-52b45731db00-operator-scripts\") pod \"barbican-db-create-jqz7h\" (UID: \"2097afb5-f032-45c6-a7d4-52b45731db00\") " pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.610057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2097afb5-f032-45c6-a7d4-52b45731db00-operator-scripts\") pod \"barbican-db-create-jqz7h\" (UID: \"2097afb5-f032-45c6-a7d4-52b45731db00\") " pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.633971 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-hllg2"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.635549 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.645098 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.646034 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6c56\" (UniqueName: \"kubernetes.io/projected/2097afb5-f032-45c6-a7d4-52b45731db00-kube-api-access-d6c56\") pod \"barbican-db-create-jqz7h\" (UID: \"2097afb5-f032-45c6-a7d4-52b45731db00\") " pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.649905 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hllg2"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.672563 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-264d-account-create-bgw6r"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.673884 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.677134 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.684812 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-264d-account-create-bgw6r"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.717209 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74dabb8e-81e2-4b92-ba89-436f1127473d-operator-scripts\") pod \"cinder-49bc-account-create-9f8xp\" (UID: \"74dabb8e-81e2-4b92-ba89-436f1127473d\") " pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.717276 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj2hh\" (UniqueName: 
\"kubernetes.io/projected/74dabb8e-81e2-4b92-ba89-436f1127473d-kube-api-access-cj2hh\") pod \"cinder-49bc-account-create-9f8xp\" (UID: \"74dabb8e-81e2-4b92-ba89-436f1127473d\") " pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.790920 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.818790 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f14c23c-cc14-47d9-89aa-b617eecd2d56-operator-scripts\") pod \"neutron-db-create-hllg2\" (UID: \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\") " pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.818879 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz9tt\" (UniqueName: \"kubernetes.io/projected/4396ce90-2b59-4cba-af25-9121fdb0fc28-kube-api-access-mz9tt\") pod \"barbican-264d-account-create-bgw6r\" (UID: \"4396ce90-2b59-4cba-af25-9121fdb0fc28\") " pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.818920 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4396ce90-2b59-4cba-af25-9121fdb0fc28-operator-scripts\") pod \"barbican-264d-account-create-bgw6r\" (UID: \"4396ce90-2b59-4cba-af25-9121fdb0fc28\") " pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.819051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74dabb8e-81e2-4b92-ba89-436f1127473d-operator-scripts\") pod \"cinder-49bc-account-create-9f8xp\" (UID: 
\"74dabb8e-81e2-4b92-ba89-436f1127473d\") " pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.819085 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj2hh\" (UniqueName: \"kubernetes.io/projected/74dabb8e-81e2-4b92-ba89-436f1127473d-kube-api-access-cj2hh\") pod \"cinder-49bc-account-create-9f8xp\" (UID: \"74dabb8e-81e2-4b92-ba89-436f1127473d\") " pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.819137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hztr\" (UniqueName: \"kubernetes.io/projected/6f14c23c-cc14-47d9-89aa-b617eecd2d56-kube-api-access-4hztr\") pod \"neutron-db-create-hllg2\" (UID: \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\") " pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.820004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74dabb8e-81e2-4b92-ba89-436f1127473d-operator-scripts\") pod \"cinder-49bc-account-create-9f8xp\" (UID: \"74dabb8e-81e2-4b92-ba89-436f1127473d\") " pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.841421 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0309-account-create-vmfzr"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.843061 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.847860 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.855536 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj2hh\" (UniqueName: \"kubernetes.io/projected/74dabb8e-81e2-4b92-ba89-436f1127473d-kube-api-access-cj2hh\") pod \"cinder-49bc-account-create-9f8xp\" (UID: \"74dabb8e-81e2-4b92-ba89-436f1127473d\") " pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.862109 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0309-account-create-vmfzr"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.886144 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.920091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz9tt\" (UniqueName: \"kubernetes.io/projected/4396ce90-2b59-4cba-af25-9121fdb0fc28-kube-api-access-mz9tt\") pod \"barbican-264d-account-create-bgw6r\" (UID: \"4396ce90-2b59-4cba-af25-9121fdb0fc28\") " pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.920383 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4396ce90-2b59-4cba-af25-9121fdb0fc28-operator-scripts\") pod \"barbican-264d-account-create-bgw6r\" (UID: \"4396ce90-2b59-4cba-af25-9121fdb0fc28\") " pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.920523 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hztr\" (UniqueName: 
\"kubernetes.io/projected/6f14c23c-cc14-47d9-89aa-b617eecd2d56-kube-api-access-4hztr\") pod \"neutron-db-create-hllg2\" (UID: \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\") " pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.920556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f14c23c-cc14-47d9-89aa-b617eecd2d56-operator-scripts\") pod \"neutron-db-create-hllg2\" (UID: \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\") " pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.921323 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f14c23c-cc14-47d9-89aa-b617eecd2d56-operator-scripts\") pod \"neutron-db-create-hllg2\" (UID: \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\") " pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.922521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4396ce90-2b59-4cba-af25-9121fdb0fc28-operator-scripts\") pod \"barbican-264d-account-create-bgw6r\" (UID: \"4396ce90-2b59-4cba-af25-9121fdb0fc28\") " pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.933202 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-w47d2"] Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.934405 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.939279 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.939470 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.939674 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sbjbt" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.939827 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.945405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hztr\" (UniqueName: \"kubernetes.io/projected/6f14c23c-cc14-47d9-89aa-b617eecd2d56-kube-api-access-4hztr\") pod \"neutron-db-create-hllg2\" (UID: \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\") " pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.949459 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz9tt\" (UniqueName: \"kubernetes.io/projected/4396ce90-2b59-4cba-af25-9121fdb0fc28-kube-api-access-mz9tt\") pod \"barbican-264d-account-create-bgw6r\" (UID: \"4396ce90-2b59-4cba-af25-9121fdb0fc28\") " pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:38 crc kubenswrapper[4760]: I1125 08:27:38.977369 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-w47d2"] Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.021457 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04926c37-45b4-4ecf-82ff-9613687bb30d-operator-scripts\") pod \"neutron-0309-account-create-vmfzr\" (UID: 
\"04926c37-45b4-4ecf-82ff-9613687bb30d\") " pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.021533 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hznfp\" (UniqueName: \"kubernetes.io/projected/04926c37-45b4-4ecf-82ff-9613687bb30d-kube-api-access-hznfp\") pod \"neutron-0309-account-create-vmfzr\" (UID: \"04926c37-45b4-4ecf-82ff-9613687bb30d\") " pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.021618 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-combined-ca-bundle\") pod \"keystone-db-sync-w47d2\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.021863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-config-data\") pod \"keystone-db-sync-w47d2\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.022003 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj4c9\" (UniqueName: \"kubernetes.io/projected/b6686072-680f-4070-87b2-07c886a28291-kube-api-access-nj4c9\") pod \"keystone-db-sync-w47d2\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.052750 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.057730 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.123102 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-combined-ca-bundle\") pod \"keystone-db-sync-w47d2\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.123203 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-config-data\") pod \"keystone-db-sync-w47d2\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.123273 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj4c9\" (UniqueName: \"kubernetes.io/projected/b6686072-680f-4070-87b2-07c886a28291-kube-api-access-nj4c9\") pod \"keystone-db-sync-w47d2\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.123316 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04926c37-45b4-4ecf-82ff-9613687bb30d-operator-scripts\") pod \"neutron-0309-account-create-vmfzr\" (UID: \"04926c37-45b4-4ecf-82ff-9613687bb30d\") " pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.123380 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hznfp\" (UniqueName: 
\"kubernetes.io/projected/04926c37-45b4-4ecf-82ff-9613687bb30d-kube-api-access-hznfp\") pod \"neutron-0309-account-create-vmfzr\" (UID: \"04926c37-45b4-4ecf-82ff-9613687bb30d\") " pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.131324 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04926c37-45b4-4ecf-82ff-9613687bb30d-operator-scripts\") pod \"neutron-0309-account-create-vmfzr\" (UID: \"04926c37-45b4-4ecf-82ff-9613687bb30d\") " pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.134307 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-config-data\") pod \"keystone-db-sync-w47d2\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.138921 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-combined-ca-bundle\") pod \"keystone-db-sync-w47d2\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.150677 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hznfp\" (UniqueName: \"kubernetes.io/projected/04926c37-45b4-4ecf-82ff-9613687bb30d-kube-api-access-hznfp\") pod \"neutron-0309-account-create-vmfzr\" (UID: \"04926c37-45b4-4ecf-82ff-9613687bb30d\") " pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.158714 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj4c9\" (UniqueName: 
\"kubernetes.io/projected/b6686072-680f-4070-87b2-07c886a28291-kube-api-access-nj4c9\") pod \"keystone-db-sync-w47d2\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.194474 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqz7h"] Nov 25 08:27:39 crc kubenswrapper[4760]: W1125 08:27:39.204930 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2097afb5_f032_45c6_a7d4_52b45731db00.slice/crio-e249d613eebf75b0373250bc437765d1a3c33c726c71a266a6900299a2994377 WatchSource:0}: Error finding container e249d613eebf75b0373250bc437765d1a3c33c726c71a266a6900299a2994377: Status 404 returned error can't find the container with id e249d613eebf75b0373250bc437765d1a3c33c726c71a266a6900299a2994377 Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.225658 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.272987 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.373304 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xwpz5"] Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.381737 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-49bc-account-create-9f8xp"] Nov 25 08:27:39 crc kubenswrapper[4760]: W1125 08:27:39.493742 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5084c140_9bd7_4bbf_be7c_37270ee768f8.slice/crio-1578c8430a32810ecb48339bc0e5892d0587bb9da9e5564d35407e94a4e117b9 WatchSource:0}: Error finding container 1578c8430a32810ecb48339bc0e5892d0587bb9da9e5564d35407e94a4e117b9: Status 404 returned error can't find the container with id 1578c8430a32810ecb48339bc0e5892d0587bb9da9e5564d35407e94a4e117b9 Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.522053 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xwpz5" event={"ID":"5084c140-9bd7-4bbf-be7c-37270ee768f8","Type":"ContainerStarted","Data":"1578c8430a32810ecb48339bc0e5892d0587bb9da9e5564d35407e94a4e117b9"} Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.541720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-49bc-account-create-9f8xp" event={"ID":"74dabb8e-81e2-4b92-ba89-436f1127473d","Type":"ContainerStarted","Data":"bfe56915fa357ca57bb426417cf6db0cdfb31a4915c8106bb30c3d329828d45f"} Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.543529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqz7h" event={"ID":"2097afb5-f032-45c6-a7d4-52b45731db00","Type":"ContainerStarted","Data":"e249d613eebf75b0373250bc437765d1a3c33c726c71a266a6900299a2994377"} Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.705215 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-hllg2"] Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.988662 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-264d-account-create-bgw6r"] Nov 25 08:27:39 crc kubenswrapper[4760]: I1125 08:27:39.999582 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0309-account-create-vmfzr"] Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.220168 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-w47d2"] Nov 25 08:27:40 crc kubenswrapper[4760]: W1125 08:27:40.243523 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6686072_680f_4070_87b2_07c886a28291.slice/crio-08e861729e2d3c6778287fc2653e4757115be4039bddce45a719869e95c6f3c5 WatchSource:0}: Error finding container 08e861729e2d3c6778287fc2653e4757115be4039bddce45a719869e95c6f3c5: Status 404 returned error can't find the container with id 08e861729e2d3c6778287fc2653e4757115be4039bddce45a719869e95c6f3c5 Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.552350 4760 generic.go:334] "Generic (PLEG): container finished" podID="2097afb5-f032-45c6-a7d4-52b45731db00" containerID="00dd4b6c4d2333e86ea4387c4861210196355f871789511c1bda617805e48779" exitCode=0 Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.552438 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqz7h" event={"ID":"2097afb5-f032-45c6-a7d4-52b45731db00","Type":"ContainerDied","Data":"00dd4b6c4d2333e86ea4387c4861210196355f871789511c1bda617805e48779"} Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.558301 4760 generic.go:334] "Generic (PLEG): container finished" podID="04926c37-45b4-4ecf-82ff-9613687bb30d" containerID="d684ef9a1354366f3683409404e0180c2f97fb0aeaa031c922a06844b177a1f2" exitCode=0 Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.558413 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-0309-account-create-vmfzr" event={"ID":"04926c37-45b4-4ecf-82ff-9613687bb30d","Type":"ContainerDied","Data":"d684ef9a1354366f3683409404e0180c2f97fb0aeaa031c922a06844b177a1f2"} Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.558443 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0309-account-create-vmfzr" event={"ID":"04926c37-45b4-4ecf-82ff-9613687bb30d","Type":"ContainerStarted","Data":"35b698070e59987bf384f9575007ce847ec39aea55062017b581d6430a0c58f2"} Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.563337 4760 generic.go:334] "Generic (PLEG): container finished" podID="6f14c23c-cc14-47d9-89aa-b617eecd2d56" containerID="13d92cca116417133398ad6f495b7a2ae8826d5038b30aad12f9c5ea106afd79" exitCode=0 Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.563438 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hllg2" event={"ID":"6f14c23c-cc14-47d9-89aa-b617eecd2d56","Type":"ContainerDied","Data":"13d92cca116417133398ad6f495b7a2ae8826d5038b30aad12f9c5ea106afd79"} Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.563490 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hllg2" event={"ID":"6f14c23c-cc14-47d9-89aa-b617eecd2d56","Type":"ContainerStarted","Data":"e68cc31315b16db9aaeffa4b3b9098f63fd8f26b26417f118714b5075470e0fc"} Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.565743 4760 generic.go:334] "Generic (PLEG): container finished" podID="5084c140-9bd7-4bbf-be7c-37270ee768f8" containerID="b9f78ba9515147a8e5672ba6413b9ff3bc88109bc2a151ee340fcfcb12db5934" exitCode=0 Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.565917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xwpz5" event={"ID":"5084c140-9bd7-4bbf-be7c-37270ee768f8","Type":"ContainerDied","Data":"b9f78ba9515147a8e5672ba6413b9ff3bc88109bc2a151ee340fcfcb12db5934"} Nov 25 08:27:40 crc 
kubenswrapper[4760]: I1125 08:27:40.573816 4760 generic.go:334] "Generic (PLEG): container finished" podID="4396ce90-2b59-4cba-af25-9121fdb0fc28" containerID="ae887807b72417fd7fa33a6c1b1f897826e7f2e2c1b51f530096a4cef78dc7ad" exitCode=0 Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.573888 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-264d-account-create-bgw6r" event={"ID":"4396ce90-2b59-4cba-af25-9121fdb0fc28","Type":"ContainerDied","Data":"ae887807b72417fd7fa33a6c1b1f897826e7f2e2c1b51f530096a4cef78dc7ad"} Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.573916 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-264d-account-create-bgw6r" event={"ID":"4396ce90-2b59-4cba-af25-9121fdb0fc28","Type":"ContainerStarted","Data":"8e3dcb401a44fa2f0fb1a06d285128eb6e56ac40c00a146e518155909a5556e2"} Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.579021 4760 generic.go:334] "Generic (PLEG): container finished" podID="74dabb8e-81e2-4b92-ba89-436f1127473d" containerID="f97fb1034575c6317e545d393ef5a3a8b155df265adc7f0cf445a49b85110815" exitCode=0 Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.579207 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-49bc-account-create-9f8xp" event={"ID":"74dabb8e-81e2-4b92-ba89-436f1127473d","Type":"ContainerDied","Data":"f97fb1034575c6317e545d393ef5a3a8b155df265adc7f0cf445a49b85110815"} Nov 25 08:27:40 crc kubenswrapper[4760]: I1125 08:27:40.580621 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w47d2" event={"ID":"b6686072-680f-4070-87b2-07c886a28291","Type":"ContainerStarted","Data":"08e861729e2d3c6778287fc2653e4757115be4039bddce45a719869e95c6f3c5"} Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.000596 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.182357 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hznfp\" (UniqueName: \"kubernetes.io/projected/04926c37-45b4-4ecf-82ff-9613687bb30d-kube-api-access-hznfp\") pod \"04926c37-45b4-4ecf-82ff-9613687bb30d\" (UID: \"04926c37-45b4-4ecf-82ff-9613687bb30d\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.182409 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04926c37-45b4-4ecf-82ff-9613687bb30d-operator-scripts\") pod \"04926c37-45b4-4ecf-82ff-9613687bb30d\" (UID: \"04926c37-45b4-4ecf-82ff-9613687bb30d\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.183384 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04926c37-45b4-4ecf-82ff-9613687bb30d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04926c37-45b4-4ecf-82ff-9613687bb30d" (UID: "04926c37-45b4-4ecf-82ff-9613687bb30d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.194539 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04926c37-45b4-4ecf-82ff-9613687bb30d-kube-api-access-hznfp" (OuterVolumeSpecName: "kube-api-access-hznfp") pod "04926c37-45b4-4ecf-82ff-9613687bb30d" (UID: "04926c37-45b4-4ecf-82ff-9613687bb30d"). InnerVolumeSpecName "kube-api-access-hznfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.284035 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hznfp\" (UniqueName: \"kubernetes.io/projected/04926c37-45b4-4ecf-82ff-9613687bb30d-kube-api-access-hznfp\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.284065 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04926c37-45b4-4ecf-82ff-9613687bb30d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.284314 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.294477 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.296455 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.316779 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.317343 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.385114 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74dabb8e-81e2-4b92-ba89-436f1127473d-operator-scripts\") pod \"74dabb8e-81e2-4b92-ba89-436f1127473d\" (UID: \"74dabb8e-81e2-4b92-ba89-436f1127473d\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.385234 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2kzr\" (UniqueName: \"kubernetes.io/projected/5084c140-9bd7-4bbf-be7c-37270ee768f8-kube-api-access-g2kzr\") pod \"5084c140-9bd7-4bbf-be7c-37270ee768f8\" (UID: \"5084c140-9bd7-4bbf-be7c-37270ee768f8\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.385362 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5084c140-9bd7-4bbf-be7c-37270ee768f8-operator-scripts\") pod \"5084c140-9bd7-4bbf-be7c-37270ee768f8\" (UID: \"5084c140-9bd7-4bbf-be7c-37270ee768f8\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.385532 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f14c23c-cc14-47d9-89aa-b617eecd2d56-operator-scripts\") pod \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\" (UID: \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.385578 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj2hh\" (UniqueName: \"kubernetes.io/projected/74dabb8e-81e2-4b92-ba89-436f1127473d-kube-api-access-cj2hh\") pod \"74dabb8e-81e2-4b92-ba89-436f1127473d\" (UID: \"74dabb8e-81e2-4b92-ba89-436f1127473d\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.385618 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4hztr\" (UniqueName: \"kubernetes.io/projected/6f14c23c-cc14-47d9-89aa-b617eecd2d56-kube-api-access-4hztr\") pod \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\" (UID: \"6f14c23c-cc14-47d9-89aa-b617eecd2d56\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.386588 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5084c140-9bd7-4bbf-be7c-37270ee768f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5084c140-9bd7-4bbf-be7c-37270ee768f8" (UID: "5084c140-9bd7-4bbf-be7c-37270ee768f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.387352 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f14c23c-cc14-47d9-89aa-b617eecd2d56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f14c23c-cc14-47d9-89aa-b617eecd2d56" (UID: "6f14c23c-cc14-47d9-89aa-b617eecd2d56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.389553 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5084c140-9bd7-4bbf-be7c-37270ee768f8-kube-api-access-g2kzr" (OuterVolumeSpecName: "kube-api-access-g2kzr") pod "5084c140-9bd7-4bbf-be7c-37270ee768f8" (UID: "5084c140-9bd7-4bbf-be7c-37270ee768f8"). InnerVolumeSpecName "kube-api-access-g2kzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.390078 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74dabb8e-81e2-4b92-ba89-436f1127473d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74dabb8e-81e2-4b92-ba89-436f1127473d" (UID: "74dabb8e-81e2-4b92-ba89-436f1127473d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.395238 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74dabb8e-81e2-4b92-ba89-436f1127473d-kube-api-access-cj2hh" (OuterVolumeSpecName: "kube-api-access-cj2hh") pod "74dabb8e-81e2-4b92-ba89-436f1127473d" (UID: "74dabb8e-81e2-4b92-ba89-436f1127473d"). InnerVolumeSpecName "kube-api-access-cj2hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.402446 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f14c23c-cc14-47d9-89aa-b617eecd2d56-kube-api-access-4hztr" (OuterVolumeSpecName: "kube-api-access-4hztr") pod "6f14c23c-cc14-47d9-89aa-b617eecd2d56" (UID: "6f14c23c-cc14-47d9-89aa-b617eecd2d56"). InnerVolumeSpecName "kube-api-access-4hztr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.487208 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6c56\" (UniqueName: \"kubernetes.io/projected/2097afb5-f032-45c6-a7d4-52b45731db00-kube-api-access-d6c56\") pod \"2097afb5-f032-45c6-a7d4-52b45731db00\" (UID: \"2097afb5-f032-45c6-a7d4-52b45731db00\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.487414 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4396ce90-2b59-4cba-af25-9121fdb0fc28-operator-scripts\") pod \"4396ce90-2b59-4cba-af25-9121fdb0fc28\" (UID: \"4396ce90-2b59-4cba-af25-9121fdb0fc28\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.487456 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz9tt\" (UniqueName: 
\"kubernetes.io/projected/4396ce90-2b59-4cba-af25-9121fdb0fc28-kube-api-access-mz9tt\") pod \"4396ce90-2b59-4cba-af25-9121fdb0fc28\" (UID: \"4396ce90-2b59-4cba-af25-9121fdb0fc28\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.487555 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2097afb5-f032-45c6-a7d4-52b45731db00-operator-scripts\") pod \"2097afb5-f032-45c6-a7d4-52b45731db00\" (UID: \"2097afb5-f032-45c6-a7d4-52b45731db00\") " Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.487864 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f14c23c-cc14-47d9-89aa-b617eecd2d56-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.487863 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4396ce90-2b59-4cba-af25-9121fdb0fc28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4396ce90-2b59-4cba-af25-9121fdb0fc28" (UID: "4396ce90-2b59-4cba-af25-9121fdb0fc28"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.487885 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj2hh\" (UniqueName: \"kubernetes.io/projected/74dabb8e-81e2-4b92-ba89-436f1127473d-kube-api-access-cj2hh\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.487987 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hztr\" (UniqueName: \"kubernetes.io/projected/6f14c23c-cc14-47d9-89aa-b617eecd2d56-kube-api-access-4hztr\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.488003 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74dabb8e-81e2-4b92-ba89-436f1127473d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.488014 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2kzr\" (UniqueName: \"kubernetes.io/projected/5084c140-9bd7-4bbf-be7c-37270ee768f8-kube-api-access-g2kzr\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.488114 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5084c140-9bd7-4bbf-be7c-37270ee768f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.488181 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2097afb5-f032-45c6-a7d4-52b45731db00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2097afb5-f032-45c6-a7d4-52b45731db00" (UID: "2097afb5-f032-45c6-a7d4-52b45731db00"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.491754 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4396ce90-2b59-4cba-af25-9121fdb0fc28-kube-api-access-mz9tt" (OuterVolumeSpecName: "kube-api-access-mz9tt") pod "4396ce90-2b59-4cba-af25-9121fdb0fc28" (UID: "4396ce90-2b59-4cba-af25-9121fdb0fc28"). InnerVolumeSpecName "kube-api-access-mz9tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.491823 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2097afb5-f032-45c6-a7d4-52b45731db00-kube-api-access-d6c56" (OuterVolumeSpecName: "kube-api-access-d6c56") pod "2097afb5-f032-45c6-a7d4-52b45731db00" (UID: "2097afb5-f032-45c6-a7d4-52b45731db00"). InnerVolumeSpecName "kube-api-access-d6c56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.589745 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4396ce90-2b59-4cba-af25-9121fdb0fc28-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.589777 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz9tt\" (UniqueName: \"kubernetes.io/projected/4396ce90-2b59-4cba-af25-9121fdb0fc28-kube-api-access-mz9tt\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.589788 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2097afb5-f032-45c6-a7d4-52b45731db00-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.589808 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6c56\" (UniqueName: 
\"kubernetes.io/projected/2097afb5-f032-45c6-a7d4-52b45731db00-kube-api-access-d6c56\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.602635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xwpz5" event={"ID":"5084c140-9bd7-4bbf-be7c-37270ee768f8","Type":"ContainerDied","Data":"1578c8430a32810ecb48339bc0e5892d0587bb9da9e5564d35407e94a4e117b9"} Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.602672 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xwpz5" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.602676 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1578c8430a32810ecb48339bc0e5892d0587bb9da9e5564d35407e94a4e117b9" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.604472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-264d-account-create-bgw6r" event={"ID":"4396ce90-2b59-4cba-af25-9121fdb0fc28","Type":"ContainerDied","Data":"8e3dcb401a44fa2f0fb1a06d285128eb6e56ac40c00a146e518155909a5556e2"} Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.604505 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e3dcb401a44fa2f0fb1a06d285128eb6e56ac40c00a146e518155909a5556e2" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.604556 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-264d-account-create-bgw6r" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.611365 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-49bc-account-create-9f8xp" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.611353 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-49bc-account-create-9f8xp" event={"ID":"74dabb8e-81e2-4b92-ba89-436f1127473d","Type":"ContainerDied","Data":"bfe56915fa357ca57bb426417cf6db0cdfb31a4915c8106bb30c3d329828d45f"} Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.611502 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe56915fa357ca57bb426417cf6db0cdfb31a4915c8106bb30c3d329828d45f" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.613821 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqz7h" event={"ID":"2097afb5-f032-45c6-a7d4-52b45731db00","Type":"ContainerDied","Data":"e249d613eebf75b0373250bc437765d1a3c33c726c71a266a6900299a2994377"} Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.613876 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e249d613eebf75b0373250bc437765d1a3c33c726c71a266a6900299a2994377" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.613953 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqz7h" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.618364 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0309-account-create-vmfzr" event={"ID":"04926c37-45b4-4ecf-82ff-9613687bb30d","Type":"ContainerDied","Data":"35b698070e59987bf384f9575007ce847ec39aea55062017b581d6430a0c58f2"} Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.618397 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35b698070e59987bf384f9575007ce847ec39aea55062017b581d6430a0c58f2" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.618446 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0309-account-create-vmfzr" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.624108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hllg2" event={"ID":"6f14c23c-cc14-47d9-89aa-b617eecd2d56","Type":"ContainerDied","Data":"e68cc31315b16db9aaeffa4b3b9098f63fd8f26b26417f118714b5075470e0fc"} Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.624150 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e68cc31315b16db9aaeffa4b3b9098f63fd8f26b26417f118714b5075470e0fc" Nov 25 08:27:42 crc kubenswrapper[4760]: I1125 08:27:42.624207 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hllg2" Nov 25 08:27:43 crc kubenswrapper[4760]: I1125 08:27:43.163436 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:27:43 crc kubenswrapper[4760]: I1125 08:27:43.223951 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-jhwc6"] Nov 25 08:27:43 crc kubenswrapper[4760]: I1125 08:27:43.225896 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" podUID="16c68abf-1eb4-4516-a83d-0ca72287b9fd" containerName="dnsmasq-dns" containerID="cri-o://c97ce401e1a3e5141d783735c6986c875f4a1fe1686670c4a8b5f540970a80d4" gracePeriod=10 Nov 25 08:27:43 crc kubenswrapper[4760]: I1125 08:27:43.647485 4760 generic.go:334] "Generic (PLEG): container finished" podID="16c68abf-1eb4-4516-a83d-0ca72287b9fd" containerID="c97ce401e1a3e5141d783735c6986c875f4a1fe1686670c4a8b5f540970a80d4" exitCode=0 Nov 25 08:27:43 crc kubenswrapper[4760]: I1125 08:27:43.647569 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" 
event={"ID":"16c68abf-1eb4-4516-a83d-0ca72287b9fd","Type":"ContainerDied","Data":"c97ce401e1a3e5141d783735c6986c875f4a1fe1686670c4a8b5f540970a80d4"} Nov 25 08:27:45 crc kubenswrapper[4760]: I1125 08:27:45.966721 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.045982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-nb\") pod \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.046306 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-dns-svc\") pod \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.046439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-config\") pod \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.046535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-sb\") pod \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.046675 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr9sb\" (UniqueName: \"kubernetes.io/projected/16c68abf-1eb4-4516-a83d-0ca72287b9fd-kube-api-access-dr9sb\") 
pod \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\" (UID: \"16c68abf-1eb4-4516-a83d-0ca72287b9fd\") " Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.058525 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c68abf-1eb4-4516-a83d-0ca72287b9fd-kube-api-access-dr9sb" (OuterVolumeSpecName: "kube-api-access-dr9sb") pod "16c68abf-1eb4-4516-a83d-0ca72287b9fd" (UID: "16c68abf-1eb4-4516-a83d-0ca72287b9fd"). InnerVolumeSpecName "kube-api-access-dr9sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.090209 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16c68abf-1eb4-4516-a83d-0ca72287b9fd" (UID: "16c68abf-1eb4-4516-a83d-0ca72287b9fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.091942 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-config" (OuterVolumeSpecName: "config") pod "16c68abf-1eb4-4516-a83d-0ca72287b9fd" (UID: "16c68abf-1eb4-4516-a83d-0ca72287b9fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.092690 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16c68abf-1eb4-4516-a83d-0ca72287b9fd" (UID: "16c68abf-1eb4-4516-a83d-0ca72287b9fd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.097962 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16c68abf-1eb4-4516-a83d-0ca72287b9fd" (UID: "16c68abf-1eb4-4516-a83d-0ca72287b9fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.148727 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.148763 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr9sb\" (UniqueName: \"kubernetes.io/projected/16c68abf-1eb4-4516-a83d-0ca72287b9fd-kube-api-access-dr9sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.148774 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.148785 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.148794 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c68abf-1eb4-4516-a83d-0ca72287b9fd-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.687640 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" 
event={"ID":"16c68abf-1eb4-4516-a83d-0ca72287b9fd","Type":"ContainerDied","Data":"0b9d8759db22a1ac92b7749b74bc743c181f3430c6e4e9232a0b794223fc687a"} Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.687933 4760 scope.go:117] "RemoveContainer" containerID="c97ce401e1a3e5141d783735c6986c875f4a1fe1686670c4a8b5f540970a80d4" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.687654 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-jhwc6" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.692075 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w47d2" event={"ID":"b6686072-680f-4070-87b2-07c886a28291","Type":"ContainerStarted","Data":"082a1d3b7bfd7d975171c03d8c2f49a043d0397a830052b9bf5ee76c2e72e569"} Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.713984 4760 scope.go:117] "RemoveContainer" containerID="63916b9991003f9257788a992cace8f92c8af577ccac376c71aee79007dfcecd" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.716559 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-w47d2" podStartSLOduration=3.270551033 podStartE2EDuration="8.716541999s" podCreationTimestamp="2025-11-25 08:27:38 +0000 UTC" firstStartedPulling="2025-11-25 08:27:40.245885361 +0000 UTC m=+993.954916156" lastFinishedPulling="2025-11-25 08:27:45.691876327 +0000 UTC m=+999.400907122" observedRunningTime="2025-11-25 08:27:46.708486455 +0000 UTC m=+1000.417517250" watchObservedRunningTime="2025-11-25 08:27:46.716541999 +0000 UTC m=+1000.425572794" Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.735832 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-jhwc6"] Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 08:27:46.742455 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-jhwc6"] Nov 25 08:27:46 crc kubenswrapper[4760]: I1125 
08:27:46.948211 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c68abf-1eb4-4516-a83d-0ca72287b9fd" path="/var/lib/kubelet/pods/16c68abf-1eb4-4516-a83d-0ca72287b9fd/volumes" Nov 25 08:27:49 crc kubenswrapper[4760]: I1125 08:27:49.717642 4760 generic.go:334] "Generic (PLEG): container finished" podID="b6686072-680f-4070-87b2-07c886a28291" containerID="082a1d3b7bfd7d975171c03d8c2f49a043d0397a830052b9bf5ee76c2e72e569" exitCode=0 Nov 25 08:27:49 crc kubenswrapper[4760]: I1125 08:27:49.717724 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w47d2" event={"ID":"b6686072-680f-4070-87b2-07c886a28291","Type":"ContainerDied","Data":"082a1d3b7bfd7d975171c03d8c2f49a043d0397a830052b9bf5ee76c2e72e569"} Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.050937 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.146117 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-combined-ca-bundle\") pod \"b6686072-680f-4070-87b2-07c886a28291\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.146233 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-config-data\") pod \"b6686072-680f-4070-87b2-07c886a28291\" (UID: \"b6686072-680f-4070-87b2-07c886a28291\") " Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.146331 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj4c9\" (UniqueName: \"kubernetes.io/projected/b6686072-680f-4070-87b2-07c886a28291-kube-api-access-nj4c9\") pod \"b6686072-680f-4070-87b2-07c886a28291\" (UID: 
\"b6686072-680f-4070-87b2-07c886a28291\") " Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.152508 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6686072-680f-4070-87b2-07c886a28291-kube-api-access-nj4c9" (OuterVolumeSpecName: "kube-api-access-nj4c9") pod "b6686072-680f-4070-87b2-07c886a28291" (UID: "b6686072-680f-4070-87b2-07c886a28291"). InnerVolumeSpecName "kube-api-access-nj4c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.168998 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6686072-680f-4070-87b2-07c886a28291" (UID: "b6686072-680f-4070-87b2-07c886a28291"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.194557 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-config-data" (OuterVolumeSpecName: "config-data") pod "b6686072-680f-4070-87b2-07c886a28291" (UID: "b6686072-680f-4070-87b2-07c886a28291"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.248011 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.248331 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6686072-680f-4070-87b2-07c886a28291-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.248341 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj4c9\" (UniqueName: \"kubernetes.io/projected/b6686072-680f-4070-87b2-07c886a28291-kube-api-access-nj4c9\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.734703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w47d2" event={"ID":"b6686072-680f-4070-87b2-07c886a28291","Type":"ContainerDied","Data":"08e861729e2d3c6778287fc2653e4757115be4039bddce45a719869e95c6f3c5"} Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.734764 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08e861729e2d3c6778287fc2653e4757115be4039bddce45a719869e95c6f3c5" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.734939 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-w47d2" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.925102 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b76c757b7-7vgvc"] Nov 25 08:27:51 crc kubenswrapper[4760]: E1125 08:27:51.964738 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4396ce90-2b59-4cba-af25-9121fdb0fc28" containerName="mariadb-account-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.964962 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4396ce90-2b59-4cba-af25-9121fdb0fc28" containerName="mariadb-account-create" Nov 25 08:27:51 crc kubenswrapper[4760]: E1125 08:27:51.965051 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04926c37-45b4-4ecf-82ff-9613687bb30d" containerName="mariadb-account-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.965125 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="04926c37-45b4-4ecf-82ff-9613687bb30d" containerName="mariadb-account-create" Nov 25 08:27:51 crc kubenswrapper[4760]: E1125 08:27:51.965215 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5084c140-9bd7-4bbf-be7c-37270ee768f8" containerName="mariadb-database-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.965911 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5084c140-9bd7-4bbf-be7c-37270ee768f8" containerName="mariadb-database-create" Nov 25 08:27:51 crc kubenswrapper[4760]: E1125 08:27:51.965998 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6686072-680f-4070-87b2-07c886a28291" containerName="keystone-db-sync" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.966053 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6686072-680f-4070-87b2-07c886a28291" containerName="keystone-db-sync" Nov 25 08:27:51 crc kubenswrapper[4760]: E1125 08:27:51.966126 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="16c68abf-1eb4-4516-a83d-0ca72287b9fd" containerName="init" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.966187 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c68abf-1eb4-4516-a83d-0ca72287b9fd" containerName="init" Nov 25 08:27:51 crc kubenswrapper[4760]: E1125 08:27:51.966276 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2097afb5-f032-45c6-a7d4-52b45731db00" containerName="mariadb-database-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.966342 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2097afb5-f032-45c6-a7d4-52b45731db00" containerName="mariadb-database-create" Nov 25 08:27:51 crc kubenswrapper[4760]: E1125 08:27:51.966421 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c68abf-1eb4-4516-a83d-0ca72287b9fd" containerName="dnsmasq-dns" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.966486 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c68abf-1eb4-4516-a83d-0ca72287b9fd" containerName="dnsmasq-dns" Nov 25 08:27:51 crc kubenswrapper[4760]: E1125 08:27:51.966566 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f14c23c-cc14-47d9-89aa-b617eecd2d56" containerName="mariadb-database-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.966643 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f14c23c-cc14-47d9-89aa-b617eecd2d56" containerName="mariadb-database-create" Nov 25 08:27:51 crc kubenswrapper[4760]: E1125 08:27:51.966731 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74dabb8e-81e2-4b92-ba89-436f1127473d" containerName="mariadb-account-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.966793 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="74dabb8e-81e2-4b92-ba89-436f1127473d" containerName="mariadb-account-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.967405 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="74dabb8e-81e2-4b92-ba89-436f1127473d" containerName="mariadb-account-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.969060 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c68abf-1eb4-4516-a83d-0ca72287b9fd" containerName="dnsmasq-dns" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.969146 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4396ce90-2b59-4cba-af25-9121fdb0fc28" containerName="mariadb-account-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.969226 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f14c23c-cc14-47d9-89aa-b617eecd2d56" containerName="mariadb-database-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.969327 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="04926c37-45b4-4ecf-82ff-9613687bb30d" containerName="mariadb-account-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.969401 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6686072-680f-4070-87b2-07c886a28291" containerName="keystone-db-sync" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.969482 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2097afb5-f032-45c6-a7d4-52b45731db00" containerName="mariadb-database-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.969584 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5084c140-9bd7-4bbf-be7c-37270ee768f8" containerName="mariadb-database-create" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.971208 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7xnkp"] Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.972319 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:51 crc kubenswrapper[4760]: I1125 08:27:51.997356 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b76c757b7-7vgvc"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:51.997640 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.019430 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7xnkp"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.020849 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sbjbt" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.021830 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.043333 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.043619 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.049432 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.140572 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5zhtm"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.154709 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167037 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-config\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167103 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7cql\" (UniqueName: \"kubernetes.io/projected/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-kube-api-access-p7cql\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167154 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-nb\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167187 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-config-data\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167202 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g2cb\" (UniqueName: \"kubernetes.io/projected/60e0dc86-edc9-45a5-a429-daa4b2d7343f-kube-api-access-5g2cb\") pod \"keystone-bootstrap-7xnkp\" (UID: 
\"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-credential-keys\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167264 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-fernet-keys\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167284 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-sb\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167305 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-dns-svc\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167327 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-scripts\") pod \"keystone-bootstrap-7xnkp\" (UID: 
\"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-combined-ca-bundle\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167654 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.167887 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.168171 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ljtn8" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.175551 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cb7678ff9-6sgdj"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.176984 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.180707 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-85rjz" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.191036 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.191120 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.191220 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.211821 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5zhtm"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.267321 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cb7678ff9-6sgdj"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.269702 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-combined-ca-bundle\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.269777 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-config\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.269832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7cql\" (UniqueName: 
\"kubernetes.io/projected/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-kube-api-access-p7cql\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.269868 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-config\") pod \"neutron-db-sync-5zhtm\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.269918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-nb\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.269941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmw8\" (UniqueName: \"kubernetes.io/projected/5394304b-1d0b-496b-9c30-383d1822341a-kube-api-access-2pmw8\") pod \"neutron-db-sync-5zhtm\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.269989 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-config-data\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.270012 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g2cb\" (UniqueName: 
\"kubernetes.io/projected/60e0dc86-edc9-45a5-a429-daa4b2d7343f-kube-api-access-5g2cb\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.270048 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-credential-keys\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.270068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-fernet-keys\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.270085 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-combined-ca-bundle\") pod \"neutron-db-sync-5zhtm\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.270118 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-sb\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.270140 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-dns-svc\") pod 
\"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.270174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-scripts\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.275441 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-dns-svc\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.276440 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-sb\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.279107 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-config-data\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.279885 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-config\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc 
kubenswrapper[4760]: I1125 08:27:52.282470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-scripts\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.285839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-combined-ca-bundle\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.285904 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-pk2zm"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.286959 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.291966 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-fernet-keys\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.298360 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qvh9g" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.298544 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.298670 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.299156 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-nb\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.299448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-credential-keys\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.316843 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wrwr6"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.318100 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.328904 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pk2zm"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.331664 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7cql\" (UniqueName: \"kubernetes.io/projected/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-kube-api-access-p7cql\") pod \"dnsmasq-dns-b76c757b7-7vgvc\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.334982 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g2cb\" (UniqueName: \"kubernetes.io/projected/60e0dc86-edc9-45a5-a429-daa4b2d7343f-kube-api-access-5g2cb\") pod \"keystone-bootstrap-7xnkp\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.335432 4760 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.343078 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.349338 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b76c757b7-7vgvc"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.349911 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.355135 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b98zq" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.382558 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.384639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-combined-ca-bundle\") pod \"neutron-db-sync-5zhtm\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.384773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-scripts\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.384915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dq67\" (UniqueName: 
\"kubernetes.io/projected/2bd46062-7573-4651-a59d-f32a136433b8-kube-api-access-2dq67\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385010 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-logs\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385134 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd4pd\" (UniqueName: \"kubernetes.io/projected/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-kube-api-access-qd4pd\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385269 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-config-data\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385378 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-db-sync-config-data\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-scripts\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385554 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-scripts\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385622 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99920db5-d382-4159-a705-53428f8a61a8-etc-machine-id\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385734 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-combined-ca-bundle\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385852 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-config\") pod \"neutron-db-sync-5zhtm\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.385941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9gqv\" (UniqueName: 
\"kubernetes.io/projected/99920db5-d382-4159-a705-53428f8a61a8-kube-api-access-h9gqv\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.386016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-combined-ca-bundle\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.386085 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-config-data\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.386161 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-config-data\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.386261 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-horizon-secret-key\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.386345 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2bd46062-7573-4651-a59d-f32a136433b8-logs\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.386426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pmw8\" (UniqueName: \"kubernetes.io/projected/5394304b-1d0b-496b-9c30-383d1822341a-kube-api-access-2pmw8\") pod \"neutron-db-sync-5zhtm\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.397942 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-combined-ca-bundle\") pod \"neutron-db-sync-5zhtm\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.427864 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wrwr6"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.428013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-config\") pod \"neutron-db-sync-5zhtm\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.447330 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.450002 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.469601 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.469841 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.478921 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmw8\" (UniqueName: \"kubernetes.io/projected/5394304b-1d0b-496b-9c30-383d1822341a-kube-api-access-2pmw8\") pod \"neutron-db-sync-5zhtm\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.487192 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488112 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-scripts\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488162 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-scripts\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99920db5-d382-4159-a705-53428f8a61a8-etc-machine-id\") pod \"cinder-db-sync-pk2zm\" (UID: 
\"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488219 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-combined-ca-bundle\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488276 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9gqv\" (UniqueName: \"kubernetes.io/projected/99920db5-d382-4159-a705-53428f8a61a8-kube-api-access-h9gqv\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488638 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-combined-ca-bundle\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-config-data\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488711 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-config-data\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc 
kubenswrapper[4760]: I1125 08:27:52.488742 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-horizon-secret-key\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bd46062-7573-4651-a59d-f32a136433b8-logs\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-scripts\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dq67\" (UniqueName: \"kubernetes.io/projected/2bd46062-7573-4651-a59d-f32a136433b8-kube-api-access-2dq67\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488882 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-logs\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.488903 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qd4pd\" (UniqueName: \"kubernetes.io/projected/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-kube-api-access-qd4pd\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.489583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-config-data\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.489646 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-db-sync-config-data\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.490459 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bd46062-7573-4651-a59d-f32a136433b8-logs\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.490761 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99920db5-d382-4159-a705-53428f8a61a8-etc-machine-id\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.494059 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-config-data\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: 
\"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.494155 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-logs\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.499948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-scripts\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.503179 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-db-sync-config-data\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.509208 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-scripts\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.513810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-horizon-secret-key\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.519717 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-combined-ca-bundle\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.521004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-scripts\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.530265 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-config-data\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.541505 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.571389 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66f4bdbdb7-52nlh"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.573579 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.575179 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-combined-ca-bundle\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.584095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-config-data\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.598966 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-config-data\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.599029 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-scripts\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.599057 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-log-httpd\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.599071 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.599097 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.599131 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rds\" (UniqueName: \"kubernetes.io/projected/15e555d8-60bd-48d7-bb21-04133ffa1042-kube-api-access-p2rds\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.599209 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-run-httpd\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.623031 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dq67\" (UniqueName: \"kubernetes.io/projected/2bd46062-7573-4651-a59d-f32a136433b8-kube-api-access-2dq67\") pod \"placement-db-sync-wrwr6\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.624674 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9gqv\" (UniqueName: 
\"kubernetes.io/projected/99920db5-d382-4159-a705-53428f8a61a8-kube-api-access-h9gqv\") pod \"cinder-db-sync-pk2zm\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.631689 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd4pd\" (UniqueName: \"kubernetes.io/projected/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-kube-api-access-qd4pd\") pod \"horizon-7cb7678ff9-6sgdj\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.636949 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66f4bdbdb7-52nlh"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.673598 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2s8lr"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.674887 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.692682 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x2fwx" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702215 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-run-httpd\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-sb\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702336 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-config-data\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702356 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-nb\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702388 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcx5\" (UniqueName: 
\"kubernetes.io/projected/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-kube-api-access-mmcx5\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702418 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-scripts\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702438 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-log-httpd\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702456 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702480 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702514 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-config\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " 
pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2rds\" (UniqueName: \"kubernetes.io/projected/15e555d8-60bd-48d7-bb21-04133ffa1042-kube-api-access-p2rds\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.702591 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-dns-svc\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.704050 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-log-httpd\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.707812 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.708243 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-scripts\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.708886 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-run-httpd\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.709312 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.725019 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.759301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-config-data\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.770670 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2rds\" (UniqueName: \"kubernetes.io/projected/15e555d8-60bd-48d7-bb21-04133ffa1042-kube-api-access-p2rds\") pod \"ceilometer-0\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.785212 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2s8lr"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.805300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-nb\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 
08:27:52.805349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcx5\" (UniqueName: \"kubernetes.io/projected/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-kube-api-access-mmcx5\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.805397 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68hmf\" (UniqueName: \"kubernetes.io/projected/409e55ac-7906-4f67-ba89-f823a28796a5-kube-api-access-68hmf\") pod \"barbican-db-sync-2s8lr\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.805433 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-config\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.805458 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-combined-ca-bundle\") pod \"barbican-db-sync-2s8lr\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.805482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-db-sync-config-data\") pod \"barbican-db-sync-2s8lr\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 
08:27:52.805517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-dns-svc\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.805576 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-sb\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.806424 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-sb\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.806449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-nb\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.807021 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-config\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.811275 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-77f679bc57-gsx4p"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.812850 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.815517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-dns-svc\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.826523 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.837187 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77f679bc57-gsx4p"] Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.853991 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.856694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcx5\" (UniqueName: \"kubernetes.io/projected/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-kube-api-access-mmcx5\") pod \"dnsmasq-dns-66f4bdbdb7-52nlh\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.879146 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wrwr6" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.901973 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.908647 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh9p9\" (UniqueName: \"kubernetes.io/projected/e9473771-24b5-4d5c-8af1-b6eb204b5a14-kube-api-access-sh9p9\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.916581 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9473771-24b5-4d5c-8af1-b6eb204b5a14-horizon-secret-key\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.916713 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-config-data\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.916826 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68hmf\" (UniqueName: \"kubernetes.io/projected/409e55ac-7906-4f67-ba89-f823a28796a5-kube-api-access-68hmf\") pod \"barbican-db-sync-2s8lr\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.917599 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9473771-24b5-4d5c-8af1-b6eb204b5a14-logs\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " 
pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.917773 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-combined-ca-bundle\") pod \"barbican-db-sync-2s8lr\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.917908 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-db-sync-config-data\") pod \"barbican-db-sync-2s8lr\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.918272 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-scripts\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.931967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-combined-ca-bundle\") pod \"barbican-db-sync-2s8lr\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.945870 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.947988 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-db-sync-config-data\") pod \"barbican-db-sync-2s8lr\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:52 crc kubenswrapper[4760]: I1125 08:27:52.967888 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68hmf\" (UniqueName: \"kubernetes.io/projected/409e55ac-7906-4f67-ba89-f823a28796a5-kube-api-access-68hmf\") pod \"barbican-db-sync-2s8lr\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.020688 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-scripts\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.021021 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh9p9\" (UniqueName: \"kubernetes.io/projected/e9473771-24b5-4d5c-8af1-b6eb204b5a14-kube-api-access-sh9p9\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.021060 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9473771-24b5-4d5c-8af1-b6eb204b5a14-horizon-secret-key\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc 
kubenswrapper[4760]: I1125 08:27:53.021083 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-config-data\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.021110 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9473771-24b5-4d5c-8af1-b6eb204b5a14-logs\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.021525 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9473771-24b5-4d5c-8af1-b6eb204b5a14-logs\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.022332 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-scripts\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.024453 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-config-data\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.029443 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/e9473771-24b5-4d5c-8af1-b6eb204b5a14-horizon-secret-key\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.049444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.066098 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh9p9\" (UniqueName: \"kubernetes.io/projected/e9473771-24b5-4d5c-8af1-b6eb204b5a14-kube-api-access-sh9p9\") pod \"horizon-77f679bc57-gsx4p\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.218731 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.286728 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7xnkp"] Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.429999 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b76c757b7-7vgvc"] Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.525077 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5zhtm"] Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.776058 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7xnkp" event={"ID":"60e0dc86-edc9-45a5-a429-daa4b2d7343f","Type":"ContainerStarted","Data":"4e393159302693794771ad1a5b1b19ad1fd4b2a50a2c9d6a87fe16f4be93f70a"} Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.778588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" 
event={"ID":"6d0c97cd-f93c-4f2e-bccb-039b5a48584b","Type":"ContainerStarted","Data":"b482a6a02deb8cdb86be99e01435d44aae9c1d958258f1740c7eb8f9bd826a38"} Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.781722 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5zhtm" event={"ID":"5394304b-1d0b-496b-9c30-383d1822341a","Type":"ContainerStarted","Data":"4b27be61c11cd83e473a269d694ae37b63348a7cde1c121551a6012af0c84d86"} Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.791305 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wrwr6"] Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.796850 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cb7678ff9-6sgdj"] Nov 25 08:27:53 crc kubenswrapper[4760]: W1125 08:27:53.805722 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07b20d74_5ea2_4b15_bc05_0aa90875b5ee.slice/crio-6fd0c26049136a51b9d6289a8ff8b0c44ce75a0ab8e0d62e6bda9a3f23046519 WatchSource:0}: Error finding container 6fd0c26049136a51b9d6289a8ff8b0c44ce75a0ab8e0d62e6bda9a3f23046519: Status 404 returned error can't find the container with id 6fd0c26049136a51b9d6289a8ff8b0c44ce75a0ab8e0d62e6bda9a3f23046519 Nov 25 08:27:53 crc kubenswrapper[4760]: W1125 08:27:53.806391 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bd46062_7573_4651_a59d_f32a136433b8.slice/crio-eb392836367417dcba74c833164b223c415f79198cb08f86cbdc9175eebaa6bb WatchSource:0}: Error finding container eb392836367417dcba74c833164b223c415f79198cb08f86cbdc9175eebaa6bb: Status 404 returned error can't find the container with id eb392836367417dcba74c833164b223c415f79198cb08f86cbdc9175eebaa6bb Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.940751 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-db-sync-pk2zm"] Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.953043 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2s8lr"] Nov 25 08:27:53 crc kubenswrapper[4760]: W1125 08:27:53.973163 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15e555d8_60bd_48d7_bb21_04133ffa1042.slice/crio-23575b78c5447d02ba668ca021bc202e5676e619817d91f3ee5253d7c3c9b8fa WatchSource:0}: Error finding container 23575b78c5447d02ba668ca021bc202e5676e619817d91f3ee5253d7c3c9b8fa: Status 404 returned error can't find the container with id 23575b78c5447d02ba668ca021bc202e5676e619817d91f3ee5253d7c3c9b8fa Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.975476 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66f4bdbdb7-52nlh"] Nov 25 08:27:53 crc kubenswrapper[4760]: I1125 08:27:53.990648 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.148222 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77f679bc57-gsx4p"] Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.528700 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77f679bc57-gsx4p"] Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.582335 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76c448c485-8wvsf"] Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.592762 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.599933 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.608009 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76c448c485-8wvsf"] Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.668269 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqb5\" (UniqueName: \"kubernetes.io/projected/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-kube-api-access-kxqb5\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.668335 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-logs\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.668381 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-scripts\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.668422 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-config-data\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 
08:27:54.668443 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-horizon-secret-key\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.769746 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-scripts\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.769840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-config-data\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.769895 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-horizon-secret-key\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.769988 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqb5\" (UniqueName: \"kubernetes.io/projected/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-kube-api-access-kxqb5\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.770036 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-logs\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.770653 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-logs\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.771198 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-scripts\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.772117 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-config-data\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.780868 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-horizon-secret-key\") pod \"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.835470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqb5\" (UniqueName: \"kubernetes.io/projected/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-kube-api-access-kxqb5\") pod 
\"horizon-76c448c485-8wvsf\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.836402 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e555d8-60bd-48d7-bb21-04133ffa1042","Type":"ContainerStarted","Data":"23575b78c5447d02ba668ca021bc202e5676e619817d91f3ee5253d7c3c9b8fa"} Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.842411 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2s8lr" event={"ID":"409e55ac-7906-4f67-ba89-f823a28796a5","Type":"ContainerStarted","Data":"9781b73ac409162eef28f79bea48b16e89fbb74f2854af4d34c450834bc04497"} Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.843436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" event={"ID":"21b73fe9-d0be-4c1f-bb9d-567ac13113c8","Type":"ContainerStarted","Data":"595c7ce3257ca2d71044835affabe1af0affd7d1f40d67da6dad12b208b812d5"} Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.844276 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77f679bc57-gsx4p" event={"ID":"e9473771-24b5-4d5c-8af1-b6eb204b5a14","Type":"ContainerStarted","Data":"70a5368dcf6b8c766bf794988104e881e86eee3331cf0bf36fd278b40387bc7e"} Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.850510 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wrwr6" event={"ID":"2bd46062-7573-4651-a59d-f32a136433b8","Type":"ContainerStarted","Data":"eb392836367417dcba74c833164b223c415f79198cb08f86cbdc9175eebaa6bb"} Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.863492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pk2zm" event={"ID":"99920db5-d382-4159-a705-53428f8a61a8","Type":"ContainerStarted","Data":"76d97d042d89fc3f957872bdc835ac6dc8c7c3290d9694dc62439f9994e6ab4d"} Nov 25 08:27:54 crc 
kubenswrapper[4760]: I1125 08:27:54.879199 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cb7678ff9-6sgdj" event={"ID":"07b20d74-5ea2-4b15-bc05-0aa90875b5ee","Type":"ContainerStarted","Data":"6fd0c26049136a51b9d6289a8ff8b0c44ce75a0ab8e0d62e6bda9a3f23046519"} Nov 25 08:27:54 crc kubenswrapper[4760]: I1125 08:27:54.916765 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:27:55 crc kubenswrapper[4760]: I1125 08:27:55.441163 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76c448c485-8wvsf"] Nov 25 08:27:55 crc kubenswrapper[4760]: I1125 08:27:55.887660 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76c448c485-8wvsf" event={"ID":"fedf7fd8-2ee5-4050-8a0a-548bd6d28765","Type":"ContainerStarted","Data":"5082bb9384bd71cf7c00d3a44b9f22f302d546dfc2a5d37d943e33544207a068"} Nov 25 08:27:55 crc kubenswrapper[4760]: I1125 08:27:55.888996 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" event={"ID":"21b73fe9-d0be-4c1f-bb9d-567ac13113c8","Type":"ContainerStarted","Data":"8c759af517bb41499f996791849ca3fb24b9b1dd20902c3b38793e0a6e3060e3"} Nov 25 08:27:56 crc kubenswrapper[4760]: I1125 08:27:56.901202 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5zhtm" event={"ID":"5394304b-1d0b-496b-9c30-383d1822341a","Type":"ContainerStarted","Data":"4d3668a9f563fd64a7677aaabdab8e137fa20c640ba55e543801942cdf02eb1a"} Nov 25 08:27:56 crc kubenswrapper[4760]: I1125 08:27:56.905777 4760 generic.go:334] "Generic (PLEG): container finished" podID="21b73fe9-d0be-4c1f-bb9d-567ac13113c8" containerID="8c759af517bb41499f996791849ca3fb24b9b1dd20902c3b38793e0a6e3060e3" exitCode=0 Nov 25 08:27:56 crc kubenswrapper[4760]: I1125 08:27:56.905895 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" 
event={"ID":"21b73fe9-d0be-4c1f-bb9d-567ac13113c8","Type":"ContainerDied","Data":"8c759af517bb41499f996791849ca3fb24b9b1dd20902c3b38793e0a6e3060e3"} Nov 25 08:27:56 crc kubenswrapper[4760]: I1125 08:27:56.908671 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7xnkp" event={"ID":"60e0dc86-edc9-45a5-a429-daa4b2d7343f","Type":"ContainerStarted","Data":"ac49ef13ed406feecae306b7cfb175720518c51bb8559a5cd6106c5c3d32fa0a"} Nov 25 08:27:56 crc kubenswrapper[4760]: I1125 08:27:56.912758 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" event={"ID":"6d0c97cd-f93c-4f2e-bccb-039b5a48584b","Type":"ContainerDied","Data":"3cc224e52310af2ce7be731bba6712fbe4e21b6c8bd065a8a74357a45a31e0dd"} Nov 25 08:27:56 crc kubenswrapper[4760]: I1125 08:27:56.910923 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d0c97cd-f93c-4f2e-bccb-039b5a48584b" containerID="3cc224e52310af2ce7be731bba6712fbe4e21b6c8bd065a8a74357a45a31e0dd" exitCode=0 Nov 25 08:27:56 crc kubenswrapper[4760]: I1125 08:27:56.934688 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5zhtm" podStartSLOduration=4.934428821 podStartE2EDuration="4.934428821s" podCreationTimestamp="2025-11-25 08:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:27:56.92396711 +0000 UTC m=+1010.632997915" watchObservedRunningTime="2025-11-25 08:27:56.934428821 +0000 UTC m=+1010.643459646" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:56.990066 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7xnkp" podStartSLOduration=5.990050235 podStartE2EDuration="5.990050235s" podCreationTimestamp="2025-11-25 08:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 08:27:56.962748527 +0000 UTC m=+1010.671779332" watchObservedRunningTime="2025-11-25 08:27:56.990050235 +0000 UTC m=+1010.699081030" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.381028 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.424395 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7cql\" (UniqueName: \"kubernetes.io/projected/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-kube-api-access-p7cql\") pod \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.424442 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-nb\") pod \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.424537 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-sb\") pod \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.424569 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-config\") pod \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.424599 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-dns-svc\") pod \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\" (UID: \"6d0c97cd-f93c-4f2e-bccb-039b5a48584b\") " Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.434427 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-kube-api-access-p7cql" (OuterVolumeSpecName: "kube-api-access-p7cql") pod "6d0c97cd-f93c-4f2e-bccb-039b5a48584b" (UID: "6d0c97cd-f93c-4f2e-bccb-039b5a48584b"). InnerVolumeSpecName "kube-api-access-p7cql". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.447772 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d0c97cd-f93c-4f2e-bccb-039b5a48584b" (UID: "6d0c97cd-f93c-4f2e-bccb-039b5a48584b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.450969 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d0c97cd-f93c-4f2e-bccb-039b5a48584b" (UID: "6d0c97cd-f93c-4f2e-bccb-039b5a48584b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.460835 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-config" (OuterVolumeSpecName: "config") pod "6d0c97cd-f93c-4f2e-bccb-039b5a48584b" (UID: "6d0c97cd-f93c-4f2e-bccb-039b5a48584b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.465037 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d0c97cd-f93c-4f2e-bccb-039b5a48584b" (UID: "6d0c97cd-f93c-4f2e-bccb-039b5a48584b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.525456 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.525487 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7cql\" (UniqueName: \"kubernetes.io/projected/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-kube-api-access-p7cql\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.525500 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.525509 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.525520 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d0c97cd-f93c-4f2e-bccb-039b5a48584b-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.974168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" 
event={"ID":"6d0c97cd-f93c-4f2e-bccb-039b5a48584b","Type":"ContainerDied","Data":"b482a6a02deb8cdb86be99e01435d44aae9c1d958258f1740c7eb8f9bd826a38"} Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.974279 4760 scope.go:117] "RemoveContainer" containerID="3cc224e52310af2ce7be731bba6712fbe4e21b6c8bd065a8a74357a45a31e0dd" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.974417 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b76c757b7-7vgvc" Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.995878 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" event={"ID":"21b73fe9-d0be-4c1f-bb9d-567ac13113c8","Type":"ContainerStarted","Data":"5bc1b535d6f0fea6baabe1bb6de1d66f7b0b43ff47795d0a51c97d1b393af140"} Nov 25 08:27:57 crc kubenswrapper[4760]: I1125 08:27:57.999689 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:27:58 crc kubenswrapper[4760]: I1125 08:27:58.034086 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" podStartSLOduration=6.034071484 podStartE2EDuration="6.034071484s" podCreationTimestamp="2025-11-25 08:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:27:58.031636697 +0000 UTC m=+1011.740667492" watchObservedRunningTime="2025-11-25 08:27:58.034071484 +0000 UTC m=+1011.743102279" Nov 25 08:27:58 crc kubenswrapper[4760]: I1125 08:27:58.103978 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b76c757b7-7vgvc"] Nov 25 08:27:58 crc kubenswrapper[4760]: I1125 08:27:58.111843 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b76c757b7-7vgvc"] Nov 25 08:27:58 crc kubenswrapper[4760]: I1125 08:27:58.965111 4760 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="6d0c97cd-f93c-4f2e-bccb-039b5a48584b" path="/var/lib/kubelet/pods/6d0c97cd-f93c-4f2e-bccb-039b5a48584b/volumes" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.169288 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cb7678ff9-6sgdj"] Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.207503 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b7dd9bf58-zdxgq"] Nov 25 08:28:01 crc kubenswrapper[4760]: E1125 08:28:01.207859 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0c97cd-f93c-4f2e-bccb-039b5a48584b" containerName="init" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.207871 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0c97cd-f93c-4f2e-bccb-039b5a48584b" containerName="init" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.208016 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0c97cd-f93c-4f2e-bccb-039b5a48584b" containerName="init" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.210349 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.212060 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.233519 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b7dd9bf58-zdxgq"] Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.291826 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76c448c485-8wvsf"] Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.320989 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnmfn\" (UniqueName: \"kubernetes.io/projected/fed86ba5-c330-411e-bab0-88e86ceb8980-kube-api-access-mnmfn\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.321839 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fed86ba5-c330-411e-bab0-88e86ceb8980-logs\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.322064 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-scripts\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.322124 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-tls-certs\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.322214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-combined-ca-bundle\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.322267 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-config-data\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.322291 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-secret-key\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.329770 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6655684d54-8jfvz"] Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.331195 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.354756 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6655684d54-8jfvz"] Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.425332 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-scripts\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.425394 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-tls-certs\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.425444 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-combined-ca-bundle\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.425466 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-config-data\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.425481 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-secret-key\") 
pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.425515 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnmfn\" (UniqueName: \"kubernetes.io/projected/fed86ba5-c330-411e-bab0-88e86ceb8980-kube-api-access-mnmfn\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.425547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fed86ba5-c330-411e-bab0-88e86ceb8980-logs\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.425908 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fed86ba5-c330-411e-bab0-88e86ceb8980-logs\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.426104 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-scripts\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.427086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-config-data\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: 
I1125 08:28:01.433093 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-tls-certs\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.433192 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-combined-ca-bundle\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.435771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-secret-key\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.446645 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnmfn\" (UniqueName: \"kubernetes.io/projected/fed86ba5-c330-411e-bab0-88e86ceb8980-kube-api-access-mnmfn\") pod \"horizon-7b7dd9bf58-zdxgq\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.527307 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-scripts\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.528056 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngldx\" (UniqueName: \"kubernetes.io/projected/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-kube-api-access-ngldx\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.528136 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-combined-ca-bundle\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.528187 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-horizon-secret-key\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.528275 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-config-data\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.528320 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-horizon-tls-certs\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.528346 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-logs\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.548291 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.629966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-scripts\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.630014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngldx\" (UniqueName: \"kubernetes.io/projected/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-kube-api-access-ngldx\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.630059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-combined-ca-bundle\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.630089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-horizon-secret-key\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " 
pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.630142 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-config-data\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.630195 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-horizon-tls-certs\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.630222 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-logs\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.630795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-logs\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.632133 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-config-data\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.632590 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-scripts\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.634717 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-horizon-tls-certs\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.635470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-combined-ca-bundle\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.640711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-horizon-secret-key\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.666882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngldx\" (UniqueName: \"kubernetes.io/projected/0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc-kube-api-access-ngldx\") pod \"horizon-6655684d54-8jfvz\" (UID: \"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc\") " pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.668859 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.746513 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:28:01 crc kubenswrapper[4760]: I1125 08:28:01.746563 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:28:02 crc kubenswrapper[4760]: I1125 08:28:02.056714 4760 generic.go:334] "Generic (PLEG): container finished" podID="60e0dc86-edc9-45a5-a429-daa4b2d7343f" containerID="ac49ef13ed406feecae306b7cfb175720518c51bb8559a5cd6106c5c3d32fa0a" exitCode=0 Nov 25 08:28:02 crc kubenswrapper[4760]: I1125 08:28:02.056771 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7xnkp" event={"ID":"60e0dc86-edc9-45a5-a429-daa4b2d7343f","Type":"ContainerDied","Data":"ac49ef13ed406feecae306b7cfb175720518c51bb8559a5cd6106c5c3d32fa0a"} Nov 25 08:28:02 crc kubenswrapper[4760]: I1125 08:28:02.956553 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:28:03 crc kubenswrapper[4760]: I1125 08:28:03.047529 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-569d458467-g8shq"] Nov 25 08:28:03 crc kubenswrapper[4760]: I1125 08:28:03.047793 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-569d458467-g8shq" podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerName="dnsmasq-dns" 
containerID="cri-o://c9f5d9d79d2bb3441060848f7fd44891b54ea159d1672c2b86a769e1629f6a65" gracePeriod=10 Nov 25 08:28:03 crc kubenswrapper[4760]: I1125 08:28:03.162721 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-569d458467-g8shq" podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Nov 25 08:28:04 crc kubenswrapper[4760]: I1125 08:28:04.099328 4760 generic.go:334] "Generic (PLEG): container finished" podID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerID="c9f5d9d79d2bb3441060848f7fd44891b54ea159d1672c2b86a769e1629f6a65" exitCode=0 Nov 25 08:28:04 crc kubenswrapper[4760]: I1125 08:28:04.099407 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-569d458467-g8shq" event={"ID":"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483","Type":"ContainerDied","Data":"c9f5d9d79d2bb3441060848f7fd44891b54ea159d1672c2b86a769e1629f6a65"} Nov 25 08:28:13 crc kubenswrapper[4760]: I1125 08:28:13.161949 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-569d458467-g8shq" podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Nov 25 08:28:15 crc kubenswrapper[4760]: E1125 08:28:15.933915 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057" Nov 25 08:28:15 crc kubenswrapper[4760]: E1125 08:28:15.934457 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n677h5f6h658h668h567h55fh687h5d9h9h59dh677h59dh78h4h57bh64bh675h59dh64ch5bh54fh5f4hf4hcdh89h668hc8h99hddh59dh56bh56fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sh9p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-77f679bc57-gsx4p_openstack(e9473771-24b5-4d5c-8af1-b6eb204b5a14): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:28:15 crc kubenswrapper[4760]: E1125 
08:28:15.936749 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057\\\"\"]" pod="openstack/horizon-77f679bc57-gsx4p" podUID="e9473771-24b5-4d5c-8af1-b6eb204b5a14" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.002749 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.135210 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-scripts\") pod \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.135727 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-credential-keys\") pod \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.135780 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-config-data\") pod \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.135846 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-fernet-keys\") pod \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.135887 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-combined-ca-bundle\") pod \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.135915 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g2cb\" (UniqueName: \"kubernetes.io/projected/60e0dc86-edc9-45a5-a429-daa4b2d7343f-kube-api-access-5g2cb\") pod \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\" (UID: \"60e0dc86-edc9-45a5-a429-daa4b2d7343f\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.141819 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "60e0dc86-edc9-45a5-a429-daa4b2d7343f" (UID: "60e0dc86-edc9-45a5-a429-daa4b2d7343f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.143507 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "60e0dc86-edc9-45a5-a429-daa4b2d7343f" (UID: "60e0dc86-edc9-45a5-a429-daa4b2d7343f"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.150951 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e0dc86-edc9-45a5-a429-daa4b2d7343f-kube-api-access-5g2cb" (OuterVolumeSpecName: "kube-api-access-5g2cb") pod "60e0dc86-edc9-45a5-a429-daa4b2d7343f" (UID: "60e0dc86-edc9-45a5-a429-daa4b2d7343f"). InnerVolumeSpecName "kube-api-access-5g2cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.150797 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-scripts" (OuterVolumeSpecName: "scripts") pod "60e0dc86-edc9-45a5-a429-daa4b2d7343f" (UID: "60e0dc86-edc9-45a5-a429-daa4b2d7343f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.167824 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-config-data" (OuterVolumeSpecName: "config-data") pod "60e0dc86-edc9-45a5-a429-daa4b2d7343f" (UID: "60e0dc86-edc9-45a5-a429-daa4b2d7343f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.177408 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60e0dc86-edc9-45a5-a429-daa4b2d7343f" (UID: "60e0dc86-edc9-45a5-a429-daa4b2d7343f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.210808 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7xnkp" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.210807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7xnkp" event={"ID":"60e0dc86-edc9-45a5-a429-daa4b2d7343f","Type":"ContainerDied","Data":"4e393159302693794771ad1a5b1b19ad1fd4b2a50a2c9d6a87fe16f4be93f70a"} Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.210859 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e393159302693794771ad1a5b1b19ad1fd4b2a50a2c9d6a87fe16f4be93f70a" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.242109 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.242143 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.242154 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.242169 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g2cb\" (UniqueName: \"kubernetes.io/projected/60e0dc86-edc9-45a5-a429-daa4b2d7343f-kube-api-access-5g2cb\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.242178 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.242187 4760 
reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60e0dc86-edc9-45a5-a429-daa4b2d7343f-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 08:28:16.501681 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 08:28:16.501834 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68hmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnly
RootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2s8lr_openstack(409e55ac-7906-4f67-ba89-f823a28796a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 08:28:16.503152 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2s8lr" podUID="409e55ac-7906-4f67-ba89-f823a28796a5" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 08:28:16.509854 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 08:28:16.510017 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb8h57dhdfh59bh78h684h69hd9h649h5cdh66bh5cdh9h58dh549h55dh57fh54ch549h59dh587hc8h5bbh8bh75h8h5c7h684hc6h56bh695h74q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxqb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-76c448c485-8wvsf_openstack(fedf7fd8-2ee5-4050-8a0a-548bd6d28765): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 
08:28:16.513609 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057\\\"\"]" pod="openstack/horizon-76c448c485-8wvsf" podUID="fedf7fd8-2ee5-4050-8a0a-548bd6d28765" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 08:28:16.526089 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 08:28:16.526225 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc7h7fh5fh554h588hf4h5b7h54dh669h5c7h67bh5d5h64h669h6ch5dfh87hch5b9h599h577h5dfhc6h658h5c9h8dh56h96h57fh5bbh59fh7cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qd4pd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7cb7678ff9-6sgdj_openstack(07b20d74-5ea2-4b15-bc05-0aa90875b5ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 
08:28:16.532157 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057\\\"\"]" pod="openstack/horizon-7cb7678ff9-6sgdj" podUID="07b20d74-5ea2-4b15-bc05-0aa90875b5ee" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.540208 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.648973 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-config-data\") pod \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.649361 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9473771-24b5-4d5c-8af1-b6eb204b5a14-logs\") pod \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.649530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-scripts\") pod \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.649578 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh9p9\" (UniqueName: 
\"kubernetes.io/projected/e9473771-24b5-4d5c-8af1-b6eb204b5a14-kube-api-access-sh9p9\") pod \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.649602 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9473771-24b5-4d5c-8af1-b6eb204b5a14-horizon-secret-key\") pod \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\" (UID: \"e9473771-24b5-4d5c-8af1-b6eb204b5a14\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.649675 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-config-data" (OuterVolumeSpecName: "config-data") pod "e9473771-24b5-4d5c-8af1-b6eb204b5a14" (UID: "e9473771-24b5-4d5c-8af1-b6eb204b5a14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.649718 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9473771-24b5-4d5c-8af1-b6eb204b5a14-logs" (OuterVolumeSpecName: "logs") pod "e9473771-24b5-4d5c-8af1-b6eb204b5a14" (UID: "e9473771-24b5-4d5c-8af1-b6eb204b5a14"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.650020 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.650041 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9473771-24b5-4d5c-8af1-b6eb204b5a14-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.650214 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-scripts" (OuterVolumeSpecName: "scripts") pod "e9473771-24b5-4d5c-8af1-b6eb204b5a14" (UID: "e9473771-24b5-4d5c-8af1-b6eb204b5a14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.653502 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9473771-24b5-4d5c-8af1-b6eb204b5a14-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e9473771-24b5-4d5c-8af1-b6eb204b5a14" (UID: "e9473771-24b5-4d5c-8af1-b6eb204b5a14"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.653733 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9473771-24b5-4d5c-8af1-b6eb204b5a14-kube-api-access-sh9p9" (OuterVolumeSpecName: "kube-api-access-sh9p9") pod "e9473771-24b5-4d5c-8af1-b6eb204b5a14" (UID: "e9473771-24b5-4d5c-8af1-b6eb204b5a14"). InnerVolumeSpecName "kube-api-access-sh9p9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.751160 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9473771-24b5-4d5c-8af1-b6eb204b5a14-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.751205 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh9p9\" (UniqueName: \"kubernetes.io/projected/e9473771-24b5-4d5c-8af1-b6eb204b5a14-kube-api-access-sh9p9\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.751263 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9473771-24b5-4d5c-8af1-b6eb204b5a14-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 08:28:16.830378 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140" Nov 25 08:28:16 crc kubenswrapper[4760]: E1125 08:28:16.830577 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68hf7h5d4h5b4h68dh54bhfdhc9h675h594h8fhf8h96hc5h659h99h5b7h55bh674h5d5h58dhd4h65h6hb4hbbh7ch55bh594hfch5cdh85q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2rds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(15e555d8-60bd-48d7-bb21-04133ffa1042): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.836764 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.954619 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-nb\") pod \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.954671 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-config\") pod \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.954715 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-sb\") pod \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.954768 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmvrc\" (UniqueName: \"kubernetes.io/projected/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-kube-api-access-hmvrc\") pod \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.954795 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-dns-svc\") pod \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\" (UID: \"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483\") " Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.963823 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-kube-api-access-hmvrc" (OuterVolumeSpecName: "kube-api-access-hmvrc") pod "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" (UID: "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483"). InnerVolumeSpecName "kube-api-access-hmvrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.996298 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-config" (OuterVolumeSpecName: "config") pod "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" (UID: "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:16 crc kubenswrapper[4760]: I1125 08:28:16.998280 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" (UID: "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.001017 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" (UID: "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.002661 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" (UID: "1b52a8e6-0370-4e9c-81f3-3ab4c64a7483"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.058442 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmvrc\" (UniqueName: \"kubernetes.io/projected/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-kube-api-access-hmvrc\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.058472 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.058481 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.058489 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.058498 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.090823 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7xnkp"] Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.097135 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7xnkp"] Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.181525 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hhsz8"] Nov 25 08:28:17 crc kubenswrapper[4760]: E1125 08:28:17.181893 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerName="init" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.181911 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerName="init" Nov 25 08:28:17 crc kubenswrapper[4760]: E1125 08:28:17.181932 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerName="dnsmasq-dns" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.181939 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerName="dnsmasq-dns" Nov 25 08:28:17 crc kubenswrapper[4760]: E1125 08:28:17.181965 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e0dc86-edc9-45a5-a429-daa4b2d7343f" containerName="keystone-bootstrap" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.181972 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e0dc86-edc9-45a5-a429-daa4b2d7343f" containerName="keystone-bootstrap" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.182131 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e0dc86-edc9-45a5-a429-daa4b2d7343f" containerName="keystone-bootstrap" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.182147 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerName="dnsmasq-dns" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.182706 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.184603 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.184623 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.184897 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.187569 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.187851 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sbjbt" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.188590 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hhsz8"] Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.221155 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-569d458467-g8shq" event={"ID":"1b52a8e6-0370-4e9c-81f3-3ab4c64a7483","Type":"ContainerDied","Data":"5eff68f5408e0c2f909387a6e786bc3a60a89cb2df1d00c83befb4570d335073"} Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.221215 4760 scope.go:117] "RemoveContainer" containerID="c9f5d9d79d2bb3441060848f7fd44891b54ea159d1672c2b86a769e1629f6a65" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.221171 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-569d458467-g8shq" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.222762 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77f679bc57-gsx4p" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.223330 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77f679bc57-gsx4p" event={"ID":"e9473771-24b5-4d5c-8af1-b6eb204b5a14","Type":"ContainerDied","Data":"70a5368dcf6b8c766bf794988104e881e86eee3331cf0bf36fd278b40387bc7e"} Nov 25 08:28:17 crc kubenswrapper[4760]: E1125 08:28:17.223998 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645\\\"\"" pod="openstack/barbican-db-sync-2s8lr" podUID="409e55ac-7906-4f67-ba89-f823a28796a5" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.313473 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-569d458467-g8shq"] Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.319305 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-569d458467-g8shq"] Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.362770 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77f679bc57-gsx4p"] Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.370779 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77f679bc57-gsx4p"] Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.372317 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-combined-ca-bundle\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.372407 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-scripts\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.372440 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvhbb\" (UniqueName: \"kubernetes.io/projected/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-kube-api-access-xvhbb\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.372548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-config-data\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.372641 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-credential-keys\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.372780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-fernet-keys\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.474034 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-combined-ca-bundle\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.474103 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-scripts\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.474143 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvhbb\" (UniqueName: \"kubernetes.io/projected/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-kube-api-access-xvhbb\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.474186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-config-data\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.474465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-credential-keys\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.474604 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-fernet-keys\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.479691 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-fernet-keys\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.479958 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-credential-keys\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.480188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-scripts\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.480193 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-combined-ca-bundle\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.480657 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-config-data\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " 
pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.491892 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvhbb\" (UniqueName: \"kubernetes.io/projected/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-kube-api-access-xvhbb\") pod \"keystone-bootstrap-hhsz8\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:17 crc kubenswrapper[4760]: I1125 08:28:17.500754 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:18 crc kubenswrapper[4760]: I1125 08:28:18.163506 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-569d458467-g8shq" podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Nov 25 08:28:18 crc kubenswrapper[4760]: I1125 08:28:18.947891 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b52a8e6-0370-4e9c-81f3-3ab4c64a7483" path="/var/lib/kubelet/pods/1b52a8e6-0370-4e9c-81f3-3ab4c64a7483/volumes" Nov 25 08:28:18 crc kubenswrapper[4760]: I1125 08:28:18.948709 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e0dc86-edc9-45a5-a429-daa4b2d7343f" path="/var/lib/kubelet/pods/60e0dc86-edc9-45a5-a429-daa4b2d7343f/volumes" Nov 25 08:28:18 crc kubenswrapper[4760]: I1125 08:28:18.949422 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9473771-24b5-4d5c-8af1-b6eb204b5a14" path="/var/lib/kubelet/pods/e9473771-24b5-4d5c-8af1-b6eb204b5a14/volumes" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.679642 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.706744 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854005 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-horizon-secret-key\") pod \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854151 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-config-data\") pod \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854180 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd4pd\" (UniqueName: \"kubernetes.io/projected/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-kube-api-access-qd4pd\") pod \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854236 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-logs\") pod \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854374 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-scripts\") pod \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854450 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-logs\") pod \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\" (UID: \"07b20d74-5ea2-4b15-bc05-0aa90875b5ee\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854497 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-horizon-secret-key\") pod \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854522 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-scripts\") pod \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854549 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-config-data\") pod \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854582 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxqb5\" (UniqueName: \"kubernetes.io/projected/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-kube-api-access-kxqb5\") pod \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\" (UID: \"fedf7fd8-2ee5-4050-8a0a-548bd6d28765\") " Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.854666 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-logs" (OuterVolumeSpecName: "logs") pod "fedf7fd8-2ee5-4050-8a0a-548bd6d28765" (UID: "fedf7fd8-2ee5-4050-8a0a-548bd6d28765"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.855236 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-scripts" (OuterVolumeSpecName: "scripts") pod "fedf7fd8-2ee5-4050-8a0a-548bd6d28765" (UID: "fedf7fd8-2ee5-4050-8a0a-548bd6d28765"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.855441 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-logs" (OuterVolumeSpecName: "logs") pod "07b20d74-5ea2-4b15-bc05-0aa90875b5ee" (UID: "07b20d74-5ea2-4b15-bc05-0aa90875b5ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.855484 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-config-data" (OuterVolumeSpecName: "config-data") pod "fedf7fd8-2ee5-4050-8a0a-548bd6d28765" (UID: "fedf7fd8-2ee5-4050-8a0a-548bd6d28765"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.855536 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-scripts" (OuterVolumeSpecName: "scripts") pod "07b20d74-5ea2-4b15-bc05-0aa90875b5ee" (UID: "07b20d74-5ea2-4b15-bc05-0aa90875b5ee"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.856466 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-config-data" (OuterVolumeSpecName: "config-data") pod "07b20d74-5ea2-4b15-bc05-0aa90875b5ee" (UID: "07b20d74-5ea2-4b15-bc05-0aa90875b5ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.858039 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.858062 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.858073 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.858081 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.858090 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.858098 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-logs\") on node \"crc\" 
DevicePath \"\"" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.859007 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fedf7fd8-2ee5-4050-8a0a-548bd6d28765" (UID: "fedf7fd8-2ee5-4050-8a0a-548bd6d28765"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.859078 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-kube-api-access-kxqb5" (OuterVolumeSpecName: "kube-api-access-kxqb5") pod "fedf7fd8-2ee5-4050-8a0a-548bd6d28765" (UID: "fedf7fd8-2ee5-4050-8a0a-548bd6d28765"). InnerVolumeSpecName "kube-api-access-kxqb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.860567 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "07b20d74-5ea2-4b15-bc05-0aa90875b5ee" (UID: "07b20d74-5ea2-4b15-bc05-0aa90875b5ee"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.886977 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-kube-api-access-qd4pd" (OuterVolumeSpecName: "kube-api-access-qd4pd") pod "07b20d74-5ea2-4b15-bc05-0aa90875b5ee" (UID: "07b20d74-5ea2-4b15-bc05-0aa90875b5ee"). InnerVolumeSpecName "kube-api-access-qd4pd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.959659 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.959709 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxqb5\" (UniqueName: \"kubernetes.io/projected/fedf7fd8-2ee5-4050-8a0a-548bd6d28765-kube-api-access-kxqb5\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.959729 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:27 crc kubenswrapper[4760]: I1125 08:28:27.959742 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd4pd\" (UniqueName: \"kubernetes.io/projected/07b20d74-5ea2-4b15-bc05-0aa90875b5ee-kube-api-access-qd4pd\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:28 crc kubenswrapper[4760]: I1125 08:28:28.311405 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cb7678ff9-6sgdj" event={"ID":"07b20d74-5ea2-4b15-bc05-0aa90875b5ee","Type":"ContainerDied","Data":"6fd0c26049136a51b9d6289a8ff8b0c44ce75a0ab8e0d62e6bda9a3f23046519"} Nov 25 08:28:28 crc kubenswrapper[4760]: I1125 08:28:28.311422 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cb7678ff9-6sgdj" Nov 25 08:28:28 crc kubenswrapper[4760]: I1125 08:28:28.312959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76c448c485-8wvsf" event={"ID":"fedf7fd8-2ee5-4050-8a0a-548bd6d28765","Type":"ContainerDied","Data":"5082bb9384bd71cf7c00d3a44b9f22f302d546dfc2a5d37d943e33544207a068"} Nov 25 08:28:28 crc kubenswrapper[4760]: I1125 08:28:28.313116 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76c448c485-8wvsf" Nov 25 08:28:28 crc kubenswrapper[4760]: I1125 08:28:28.376834 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cb7678ff9-6sgdj"] Nov 25 08:28:28 crc kubenswrapper[4760]: I1125 08:28:28.383609 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7cb7678ff9-6sgdj"] Nov 25 08:28:28 crc kubenswrapper[4760]: I1125 08:28:28.412492 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76c448c485-8wvsf"] Nov 25 08:28:28 crc kubenswrapper[4760]: I1125 08:28:28.422065 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76c448c485-8wvsf"] Nov 25 08:28:28 crc kubenswrapper[4760]: I1125 08:28:28.671398 4760 scope.go:117] "RemoveContainer" containerID="a9611e525499a3ad3bc15bcae667b51434fcc10e70e1e7b825b3cb7e11e9b3cf" Nov 25 08:28:28 crc kubenswrapper[4760]: E1125 08:28:28.704003 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879" Nov 25 08:28:28 crc kubenswrapper[4760]: E1125 08:28:28.704238 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9gqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pk2zm_openstack(99920db5-d382-4159-a705-53428f8a61a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:28:28 crc kubenswrapper[4760]: E1125 08:28:28.705859 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-pk2zm" podUID="99920db5-d382-4159-a705-53428f8a61a8" Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:28.948636 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b20d74-5ea2-4b15-bc05-0aa90875b5ee" path="/var/lib/kubelet/pods/07b20d74-5ea2-4b15-bc05-0aa90875b5ee/volumes" Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:28.949392 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fedf7fd8-2ee5-4050-8a0a-548bd6d28765" path="/var/lib/kubelet/pods/fedf7fd8-2ee5-4050-8a0a-548bd6d28765/volumes" Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.109073 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6655684d54-8jfvz"] Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.113734 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b7dd9bf58-zdxgq"] Nov 25 08:28:29 crc kubenswrapper[4760]: W1125 08:28:29.128055 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bbd9fea_6104_467c_8ce2_6f9be5ff8bfc.slice/crio-71b69d9d49528c4f5d6e1d740b2d1fa2f3356f626be475f407c2c73a52af624c WatchSource:0}: Error finding container 71b69d9d49528c4f5d6e1d740b2d1fa2f3356f626be475f407c2c73a52af624c: Status 404 returned error can't find the container with id 71b69d9d49528c4f5d6e1d740b2d1fa2f3356f626be475f407c2c73a52af624c Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.321396 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e555d8-60bd-48d7-bb21-04133ffa1042","Type":"ContainerStarted","Data":"063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730"} Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.322900 4760 generic.go:334] "Generic (PLEG): container finished" podID="5394304b-1d0b-496b-9c30-383d1822341a" containerID="4d3668a9f563fd64a7677aaabdab8e137fa20c640ba55e543801942cdf02eb1a" exitCode=0 Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.322974 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5zhtm" event={"ID":"5394304b-1d0b-496b-9c30-383d1822341a","Type":"ContainerDied","Data":"4d3668a9f563fd64a7677aaabdab8e137fa20c640ba55e543801942cdf02eb1a"} Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.325401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2s8lr" event={"ID":"409e55ac-7906-4f67-ba89-f823a28796a5","Type":"ContainerStarted","Data":"bb61ac46168e741100342fbac117cf81c11118cea45d5591b125d12a72af1ccf"} Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.327578 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7dd9bf58-zdxgq" event={"ID":"fed86ba5-c330-411e-bab0-88e86ceb8980","Type":"ContainerStarted","Data":"972880981c9a6c24e0cd0bc733a9a2b6616e2443144d6f2acdd0559e2010370c"} Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.328835 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/horizon-6655684d54-8jfvz" event={"ID":"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc","Type":"ContainerStarted","Data":"71b69d9d49528c4f5d6e1d740b2d1fa2f3356f626be475f407c2c73a52af624c"} Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.330929 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wrwr6" event={"ID":"2bd46062-7573-4651-a59d-f32a136433b8","Type":"ContainerStarted","Data":"8b52c754a29627617f737cb3ed7b115a0a7494c96d3e266b2719d8e4dac85d8c"} Nov 25 08:28:29 crc kubenswrapper[4760]: E1125 08:28:29.332691 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879\\\"\"" pod="openstack/cinder-db-sync-pk2zm" podUID="99920db5-d382-4159-a705-53428f8a61a8" Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.358480 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2s8lr" podStartSLOduration=2.1715595309999998 podStartE2EDuration="37.358457226s" podCreationTimestamp="2025-11-25 08:27:52 +0000 UTC" firstStartedPulling="2025-11-25 08:27:53.964142894 +0000 UTC m=+1007.673173689" lastFinishedPulling="2025-11-25 08:28:29.151040589 +0000 UTC m=+1042.860071384" observedRunningTime="2025-11-25 08:28:29.353980882 +0000 UTC m=+1043.063011677" watchObservedRunningTime="2025-11-25 08:28:29.358457226 +0000 UTC m=+1043.067488021" Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.390210 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wrwr6" podStartSLOduration=3.612734064 podStartE2EDuration="37.390189017s" podCreationTimestamp="2025-11-25 08:27:52 +0000 UTC" firstStartedPulling="2025-11-25 08:27:53.813180093 +0000 UTC m=+1007.522210888" lastFinishedPulling="2025-11-25 
08:28:27.590635046 +0000 UTC m=+1041.299665841" observedRunningTime="2025-11-25 08:28:29.385822386 +0000 UTC m=+1043.094853181" watchObservedRunningTime="2025-11-25 08:28:29.390189017 +0000 UTC m=+1043.099219812" Nov 25 08:28:29 crc kubenswrapper[4760]: I1125 08:28:29.978897 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hhsz8"] Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.342535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7dd9bf58-zdxgq" event={"ID":"fed86ba5-c330-411e-bab0-88e86ceb8980","Type":"ContainerStarted","Data":"fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d"} Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.342635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7dd9bf58-zdxgq" event={"ID":"fed86ba5-c330-411e-bab0-88e86ceb8980","Type":"ContainerStarted","Data":"d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32"} Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.345974 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhsz8" event={"ID":"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6","Type":"ContainerStarted","Data":"ef00cb0f2c9d7a1457a895997b1430dc50e50688832895c39ed22244c166088d"} Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.346322 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhsz8" event={"ID":"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6","Type":"ContainerStarted","Data":"d7e61e2a593a72bb9a37bbb88f4326ad951d15351447bc699a28b160b6ba9622"} Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.351470 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6655684d54-8jfvz" event={"ID":"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc","Type":"ContainerStarted","Data":"66798ce051cf16f069786b201d5523872f4e0384a41d88315890cc9a287e3bc7"} Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.351533 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6655684d54-8jfvz" event={"ID":"0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc","Type":"ContainerStarted","Data":"3d2c6444b95f7d3a0a024b74c56e799b65ad3bb5d7b94a5c360d671593febe36"} Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.378997 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b7dd9bf58-zdxgq" podStartSLOduration=28.874969143 podStartE2EDuration="29.378972333s" podCreationTimestamp="2025-11-25 08:28:01 +0000 UTC" firstStartedPulling="2025-11-25 08:28:29.134439428 +0000 UTC m=+1042.843470223" lastFinishedPulling="2025-11-25 08:28:29.638442618 +0000 UTC m=+1043.347473413" observedRunningTime="2025-11-25 08:28:30.365597262 +0000 UTC m=+1044.074628077" watchObservedRunningTime="2025-11-25 08:28:30.378972333 +0000 UTC m=+1044.088003128" Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.392714 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hhsz8" podStartSLOduration=13.392692954 podStartE2EDuration="13.392692954s" podCreationTimestamp="2025-11-25 08:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:30.390412801 +0000 UTC m=+1044.099443596" watchObservedRunningTime="2025-11-25 08:28:30.392692954 +0000 UTC m=+1044.101723749" Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.412193 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6655684d54-8jfvz" podStartSLOduration=28.920817087 podStartE2EDuration="29.412173855s" podCreationTimestamp="2025-11-25 08:28:01 +0000 UTC" firstStartedPulling="2025-11-25 08:28:29.14854489 +0000 UTC m=+1042.857575685" lastFinishedPulling="2025-11-25 08:28:29.639901668 +0000 UTC m=+1043.348932453" observedRunningTime="2025-11-25 08:28:30.408886874 +0000 UTC m=+1044.117917679" 
watchObservedRunningTime="2025-11-25 08:28:30.412173855 +0000 UTC m=+1044.121204650" Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.619706 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.720395 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pmw8\" (UniqueName: \"kubernetes.io/projected/5394304b-1d0b-496b-9c30-383d1822341a-kube-api-access-2pmw8\") pod \"5394304b-1d0b-496b-9c30-383d1822341a\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.720869 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-combined-ca-bundle\") pod \"5394304b-1d0b-496b-9c30-383d1822341a\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.720971 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-config\") pod \"5394304b-1d0b-496b-9c30-383d1822341a\" (UID: \"5394304b-1d0b-496b-9c30-383d1822341a\") " Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.748520 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5394304b-1d0b-496b-9c30-383d1822341a-kube-api-access-2pmw8" (OuterVolumeSpecName: "kube-api-access-2pmw8") pod "5394304b-1d0b-496b-9c30-383d1822341a" (UID: "5394304b-1d0b-496b-9c30-383d1822341a"). InnerVolumeSpecName "kube-api-access-2pmw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.779756 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5394304b-1d0b-496b-9c30-383d1822341a" (UID: "5394304b-1d0b-496b-9c30-383d1822341a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.810299 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-config" (OuterVolumeSpecName: "config") pod "5394304b-1d0b-496b-9c30-383d1822341a" (UID: "5394304b-1d0b-496b-9c30-383d1822341a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.823496 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.823541 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pmw8\" (UniqueName: \"kubernetes.io/projected/5394304b-1d0b-496b-9c30-383d1822341a-kube-api-access-2pmw8\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:30 crc kubenswrapper[4760]: I1125 08:28:30.823559 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5394304b-1d0b-496b-9c30-383d1822341a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.368725 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5zhtm" 
event={"ID":"5394304b-1d0b-496b-9c30-383d1822341a","Type":"ContainerDied","Data":"4b27be61c11cd83e473a269d694ae37b63348a7cde1c121551a6012af0c84d86"} Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.368769 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b27be61c11cd83e473a269d694ae37b63348a7cde1c121551a6012af0c84d86" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.368871 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5zhtm" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.495934 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6677d66f85-v9q29"] Nov 25 08:28:31 crc kubenswrapper[4760]: E1125 08:28:31.496515 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5394304b-1d0b-496b-9c30-383d1822341a" containerName="neutron-db-sync" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.496532 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5394304b-1d0b-496b-9c30-383d1822341a" containerName="neutron-db-sync" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.496695 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5394304b-1d0b-496b-9c30-383d1822341a" containerName="neutron-db-sync" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.503830 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.520703 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6677d66f85-v9q29"] Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.542016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-config\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.542113 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brj7p\" (UniqueName: \"kubernetes.io/projected/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-kube-api-access-brj7p\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.542222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-nb\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.542491 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-dns-svc\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.542522 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-sb\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.548773 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.548868 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.646165 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-config\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.646262 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brj7p\" (UniqueName: \"kubernetes.io/projected/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-kube-api-access-brj7p\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.646332 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-nb\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.646392 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-dns-svc\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.646414 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-sb\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.647195 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-sb\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.647749 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-config\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.648063 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-dns-svc\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.648109 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-nb\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: 
\"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.671476 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.671526 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.675083 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brj7p\" (UniqueName: \"kubernetes.io/projected/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-kube-api-access-brj7p\") pod \"dnsmasq-dns-6677d66f85-v9q29\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.735210 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7ff756f59b-f8nvt"] Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.737547 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.741094 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.742170 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.742346 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.742355 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ljtn8" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.748616 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.748678 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.748724 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.749210 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9e0ecc3c247b6af19eb122bc74a94901ef917b6bb9d5aef56c5a3aafb61bcb8"} 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.749285 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://b9e0ecc3c247b6af19eb122bc74a94901ef917b6bb9d5aef56c5a3aafb61bcb8" gracePeriod=600 Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.764901 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7ff756f59b-f8nvt"] Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.849712 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-ovndb-tls-certs\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.849801 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-combined-ca-bundle\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.849850 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-httpd-config\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.849870 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbtt2\" (UniqueName: \"kubernetes.io/projected/e6613503-bc56-448f-aa4a-ef1e4003bfb2-kube-api-access-sbtt2\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.849889 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-config\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.854586 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.951730 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-ovndb-tls-certs\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.951866 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-combined-ca-bundle\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.951920 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-httpd-config\") pod \"neutron-7ff756f59b-f8nvt\" (UID: 
\"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.951964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbtt2\" (UniqueName: \"kubernetes.io/projected/e6613503-bc56-448f-aa4a-ef1e4003bfb2-kube-api-access-sbtt2\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.952001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-config\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.959193 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-combined-ca-bundle\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.964698 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-config\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.979094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-httpd-config\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 
08:28:31.991029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbtt2\" (UniqueName: \"kubernetes.io/projected/e6613503-bc56-448f-aa4a-ef1e4003bfb2-kube-api-access-sbtt2\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:31 crc kubenswrapper[4760]: I1125 08:28:31.998061 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-ovndb-tls-certs\") pod \"neutron-7ff756f59b-f8nvt\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:32 crc kubenswrapper[4760]: I1125 08:28:32.085877 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:32 crc kubenswrapper[4760]: I1125 08:28:32.379504 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="b9e0ecc3c247b6af19eb122bc74a94901ef917b6bb9d5aef56c5a3aafb61bcb8" exitCode=0 Nov 25 08:28:32 crc kubenswrapper[4760]: I1125 08:28:32.379577 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"b9e0ecc3c247b6af19eb122bc74a94901ef917b6bb9d5aef56c5a3aafb61bcb8"} Nov 25 08:28:32 crc kubenswrapper[4760]: I1125 08:28:32.380409 4760 scope.go:117] "RemoveContainer" containerID="1b1cf405379b8f080f8ca00a8aea4c263e37ea8900c6a162c41370800ee44d84" Nov 25 08:28:33 crc kubenswrapper[4760]: I1125 08:28:33.391230 4760 generic.go:334] "Generic (PLEG): container finished" podID="2bd46062-7573-4651-a59d-f32a136433b8" containerID="8b52c754a29627617f737cb3ed7b115a0a7494c96d3e266b2719d8e4dac85d8c" exitCode=0 Nov 25 08:28:33 crc kubenswrapper[4760]: I1125 08:28:33.391288 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wrwr6" event={"ID":"2bd46062-7573-4651-a59d-f32a136433b8","Type":"ContainerDied","Data":"8b52c754a29627617f737cb3ed7b115a0a7494c96d3e266b2719d8e4dac85d8c"} Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.333388 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-564c475cd5-6wg66"] Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.335588 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.338291 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.338477 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.347802 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-564c475cd5-6wg66"] Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.512006 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-config\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.512083 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-httpd-config\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.512203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-internal-tls-certs\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.512299 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-public-tls-certs\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.512393 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-ovndb-tls-certs\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.512496 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-combined-ca-bundle\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.512553 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l2s9\" (UniqueName: \"kubernetes.io/projected/9937626b-b050-469f-9e47-78785cfb5c15-kube-api-access-2l2s9\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.614747 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-config\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.614808 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-httpd-config\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.614838 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-internal-tls-certs\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.614866 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-public-tls-certs\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.614907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-ovndb-tls-certs\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.614949 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-combined-ca-bundle\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.614980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l2s9\" (UniqueName: \"kubernetes.io/projected/9937626b-b050-469f-9e47-78785cfb5c15-kube-api-access-2l2s9\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.623556 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-combined-ca-bundle\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.632027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-ovndb-tls-certs\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.632139 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-config\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.632859 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-public-tls-certs\") pod \"neutron-564c475cd5-6wg66\" (UID: 
\"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.635750 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-httpd-config\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.637596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9937626b-b050-469f-9e47-78785cfb5c15-internal-tls-certs\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.638893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l2s9\" (UniqueName: \"kubernetes.io/projected/9937626b-b050-469f-9e47-78785cfb5c15-kube-api-access-2l2s9\") pod \"neutron-564c475cd5-6wg66\" (UID: \"9937626b-b050-469f-9e47-78785cfb5c15\") " pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:34 crc kubenswrapper[4760]: I1125 08:28:34.660801 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:35 crc kubenswrapper[4760]: I1125 08:28:35.444793 4760 generic.go:334] "Generic (PLEG): container finished" podID="62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" containerID="ef00cb0f2c9d7a1457a895997b1430dc50e50688832895c39ed22244c166088d" exitCode=0 Nov 25 08:28:35 crc kubenswrapper[4760]: I1125 08:28:35.444867 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhsz8" event={"ID":"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6","Type":"ContainerDied","Data":"ef00cb0f2c9d7a1457a895997b1430dc50e50688832895c39ed22244c166088d"} Nov 25 08:28:35 crc kubenswrapper[4760]: I1125 08:28:35.447655 4760 generic.go:334] "Generic (PLEG): container finished" podID="409e55ac-7906-4f67-ba89-f823a28796a5" containerID="bb61ac46168e741100342fbac117cf81c11118cea45d5591b125d12a72af1ccf" exitCode=0 Nov 25 08:28:35 crc kubenswrapper[4760]: I1125 08:28:35.447706 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2s8lr" event={"ID":"409e55ac-7906-4f67-ba89-f823a28796a5","Type":"ContainerDied","Data":"bb61ac46168e741100342fbac117cf81c11118cea45d5591b125d12a72af1ccf"} Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.089199 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wrwr6" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.264515 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dq67\" (UniqueName: \"kubernetes.io/projected/2bd46062-7573-4651-a59d-f32a136433b8-kube-api-access-2dq67\") pod \"2bd46062-7573-4651-a59d-f32a136433b8\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.264928 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-config-data\") pod \"2bd46062-7573-4651-a59d-f32a136433b8\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.265023 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-combined-ca-bundle\") pod \"2bd46062-7573-4651-a59d-f32a136433b8\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.265064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-scripts\") pod \"2bd46062-7573-4651-a59d-f32a136433b8\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.265177 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bd46062-7573-4651-a59d-f32a136433b8-logs\") pod \"2bd46062-7573-4651-a59d-f32a136433b8\" (UID: \"2bd46062-7573-4651-a59d-f32a136433b8\") " Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.266036 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2bd46062-7573-4651-a59d-f32a136433b8-logs" (OuterVolumeSpecName: "logs") pod "2bd46062-7573-4651-a59d-f32a136433b8" (UID: "2bd46062-7573-4651-a59d-f32a136433b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.271022 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd46062-7573-4651-a59d-f32a136433b8-kube-api-access-2dq67" (OuterVolumeSpecName: "kube-api-access-2dq67") pod "2bd46062-7573-4651-a59d-f32a136433b8" (UID: "2bd46062-7573-4651-a59d-f32a136433b8"). InnerVolumeSpecName "kube-api-access-2dq67". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.274361 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-scripts" (OuterVolumeSpecName: "scripts") pod "2bd46062-7573-4651-a59d-f32a136433b8" (UID: "2bd46062-7573-4651-a59d-f32a136433b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.313498 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bd46062-7573-4651-a59d-f32a136433b8" (UID: "2bd46062-7573-4651-a59d-f32a136433b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.318837 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-config-data" (OuterVolumeSpecName: "config-data") pod "2bd46062-7573-4651-a59d-f32a136433b8" (UID: "2bd46062-7573-4651-a59d-f32a136433b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.367633 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.367691 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.367704 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bd46062-7573-4651-a59d-f32a136433b8-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.367719 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dq67\" (UniqueName: \"kubernetes.io/projected/2bd46062-7573-4651-a59d-f32a136433b8-kube-api-access-2dq67\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.367732 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd46062-7573-4651-a59d-f32a136433b8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.392646 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6677d66f85-v9q29"] Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.474886 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"d0ea7124286527d9806dc0c775161bbfad1ddc74c136f4d8ca77bb8bd02e22cc"} Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.477206 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6677d66f85-v9q29" event={"ID":"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df","Type":"ContainerStarted","Data":"79d783065f2044bf14b63145dc5c4c47e5bf7233c9c329dd0f396187b7f5cad7"} Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.479039 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wrwr6" event={"ID":"2bd46062-7573-4651-a59d-f32a136433b8","Type":"ContainerDied","Data":"eb392836367417dcba74c833164b223c415f79198cb08f86cbdc9175eebaa6bb"} Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.479075 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb392836367417dcba74c833164b223c415f79198cb08f86cbdc9175eebaa6bb" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.479159 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wrwr6" Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.490568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e555d8-60bd-48d7-bb21-04133ffa1042","Type":"ContainerStarted","Data":"a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2"} Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.608751 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7ff756f59b-f8nvt"] Nov 25 08:28:36 crc kubenswrapper[4760]: I1125 08:28:36.705626 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-564c475cd5-6wg66"] Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.153100 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.183638 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.232610 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-598d8454cd-s4vpx"] Nov 25 08:28:37 crc kubenswrapper[4760]: E1125 08:28:37.233340 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409e55ac-7906-4f67-ba89-f823a28796a5" containerName="barbican-db-sync" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.234150 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="409e55ac-7906-4f67-ba89-f823a28796a5" containerName="barbican-db-sync" Nov 25 08:28:37 crc kubenswrapper[4760]: E1125 08:28:37.234262 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" containerName="keystone-bootstrap" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.234345 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" containerName="keystone-bootstrap" Nov 25 08:28:37 crc kubenswrapper[4760]: E1125 08:28:37.234434 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd46062-7573-4651-a59d-f32a136433b8" containerName="placement-db-sync" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.234533 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd46062-7573-4651-a59d-f32a136433b8" containerName="placement-db-sync" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.234791 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="409e55ac-7906-4f67-ba89-f823a28796a5" containerName="barbican-db-sync" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.234905 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" containerName="keystone-bootstrap" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.234977 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2bd46062-7573-4651-a59d-f32a136433b8" containerName="placement-db-sync" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.243326 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.248180 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.249212 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-598d8454cd-s4vpx"] Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.248924 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b98zq" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.249015 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.249062 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.249179 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.285283 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-combined-ca-bundle\") pod \"409e55ac-7906-4f67-ba89-f823a28796a5\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.285475 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-config-data\") pod \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " Nov 25 08:28:37 crc 
kubenswrapper[4760]: I1125 08:28:37.285616 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-credential-keys\") pod \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.285756 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvhbb\" (UniqueName: \"kubernetes.io/projected/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-kube-api-access-xvhbb\") pod \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.285860 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-fernet-keys\") pod \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.285943 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-db-sync-config-data\") pod \"409e55ac-7906-4f67-ba89-f823a28796a5\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.286023 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-combined-ca-bundle\") pod \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.286123 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-scripts\") pod \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\" (UID: \"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6\") " Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.286293 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68hmf\" (UniqueName: \"kubernetes.io/projected/409e55ac-7906-4f67-ba89-f823a28796a5-kube-api-access-68hmf\") pod \"409e55ac-7906-4f67-ba89-f823a28796a5\" (UID: \"409e55ac-7906-4f67-ba89-f823a28796a5\") " Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.295404 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "409e55ac-7906-4f67-ba89-f823a28796a5" (UID: "409e55ac-7906-4f67-ba89-f823a28796a5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.295743 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409e55ac-7906-4f67-ba89-f823a28796a5-kube-api-access-68hmf" (OuterVolumeSpecName: "kube-api-access-68hmf") pod "409e55ac-7906-4f67-ba89-f823a28796a5" (UID: "409e55ac-7906-4f67-ba89-f823a28796a5"). InnerVolumeSpecName "kube-api-access-68hmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.298379 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-kube-api-access-xvhbb" (OuterVolumeSpecName: "kube-api-access-xvhbb") pod "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" (UID: "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6"). InnerVolumeSpecName "kube-api-access-xvhbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.316743 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-scripts" (OuterVolumeSpecName: "scripts") pod "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" (UID: "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.316988 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" (UID: "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.320322 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" (UID: "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.330733 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" (UID: "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.336281 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-config-data" (OuterVolumeSpecName: "config-data") pod "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" (UID: "62eb64aa-dbc6-49d6-b8ab-8fffda94afa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.341457 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "409e55ac-7906-4f67-ba89-f823a28796a5" (UID: "409e55ac-7906-4f67-ba89-f823a28796a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.388206 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-logs\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.388341 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-config-data\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.388848 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-scripts\") pod 
\"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.388897 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-public-tls-certs\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389059 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-combined-ca-bundle\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389166 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-internal-tls-certs\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389410 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2z4d\" (UniqueName: \"kubernetes.io/projected/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-kube-api-access-d2z4d\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389732 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-fernet-keys\") on node 
\"crc\" DevicePath \"\"" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389785 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389853 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389870 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389881 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68hmf\" (UniqueName: \"kubernetes.io/projected/409e55ac-7906-4f67-ba89-f823a28796a5-kube-api-access-68hmf\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389893 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/409e55ac-7906-4f67-ba89-f823a28796a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389940 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389952 4760 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.389963 4760 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xvhbb\" (UniqueName: \"kubernetes.io/projected/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6-kube-api-access-xvhbb\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.491457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-scripts\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.491521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-public-tls-certs\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.491934 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-combined-ca-bundle\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.492066 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-internal-tls-certs\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.492295 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2z4d\" (UniqueName: \"kubernetes.io/projected/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-kube-api-access-d2z4d\") pod 
\"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.492478 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-logs\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.492564 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-config-data\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.493463 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-logs\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.502277 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-internal-tls-certs\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.507126 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-config-data\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc 
kubenswrapper[4760]: I1125 08:28:37.516987 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ff756f59b-f8nvt" event={"ID":"e6613503-bc56-448f-aa4a-ef1e4003bfb2","Type":"ContainerStarted","Data":"ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf"} Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.517031 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ff756f59b-f8nvt" event={"ID":"e6613503-bc56-448f-aa4a-ef1e4003bfb2","Type":"ContainerStarted","Data":"6a1e81b6a71dc793fb7ef46d2991e548a60a0590ffcd6380f623f68e53d92369"} Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.519753 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hhsz8" event={"ID":"62eb64aa-dbc6-49d6-b8ab-8fffda94afa6","Type":"ContainerDied","Data":"d7e61e2a593a72bb9a37bbb88f4326ad951d15351447bc699a28b160b6ba9622"} Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.519796 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e61e2a593a72bb9a37bbb88f4326ad951d15351447bc699a28b160b6ba9622" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.519861 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hhsz8" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.522668 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-public-tls-certs\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.522733 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-combined-ca-bundle\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.524854 4760 generic.go:334] "Generic (PLEG): container finished" podID="a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" containerID="ee75532f775aa5177aeb781f3e4e8146bbacd8f9cfb86b1ad416f1f274c25d34" exitCode=0 Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.524955 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" event={"ID":"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df","Type":"ContainerDied","Data":"ee75532f775aa5177aeb781f3e4e8146bbacd8f9cfb86b1ad416f1f274c25d34"} Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.526457 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-scripts\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.538405 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564c475cd5-6wg66" 
event={"ID":"9937626b-b050-469f-9e47-78785cfb5c15","Type":"ContainerStarted","Data":"ab9d0439e3cf08c3f673a7569df855355f4a418b0e107cfbb536f62c8e8353d5"} Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.540434 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564c475cd5-6wg66" event={"ID":"9937626b-b050-469f-9e47-78785cfb5c15","Type":"ContainerStarted","Data":"f53ea6f698444cae3ff6c79d42fecf58f705e7504fe7298c458bd3b176955936"} Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.538637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2z4d\" (UniqueName: \"kubernetes.io/projected/5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4-kube-api-access-d2z4d\") pod \"placement-598d8454cd-s4vpx\" (UID: \"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4\") " pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.549194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2s8lr" event={"ID":"409e55ac-7906-4f67-ba89-f823a28796a5","Type":"ContainerDied","Data":"9781b73ac409162eef28f79bea48b16e89fbb74f2854af4d34c450834bc04497"} Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.549349 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9781b73ac409162eef28f79bea48b16e89fbb74f2854af4d34c450834bc04497" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.549230 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2s8lr" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.579889 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.641226 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-69cbccbbcc-v8kx4"] Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.662558 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.663263 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69cbccbbcc-v8kx4"] Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.665720 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.678819 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.679310 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sbjbt" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.681969 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.682237 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.682356 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5d9875665c-r8sg4"] Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.682430 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.684114 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.687452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.689563 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x2fwx" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.689979 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740457 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-scripts\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740512 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zzr\" (UniqueName: \"kubernetes.io/projected/66326df4-af7d-474c-b63f-eee554099e1c-kube-api-access-f9zzr\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740540 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-public-tls-certs\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740559 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2b1b4f65-ed06-4d6d-9e74-b27255748225-logs\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740588 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1b4f65-ed06-4d6d-9e74-b27255748225-combined-ca-bundle\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740612 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdw6v\" (UniqueName: \"kubernetes.io/projected/2b1b4f65-ed06-4d6d-9e74-b27255748225-kube-api-access-tdw6v\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740629 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1b4f65-ed06-4d6d-9e74-b27255748225-config-data\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740666 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-fernet-keys\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740682 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-internal-tls-certs\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740712 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-credential-keys\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740726 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-combined-ca-bundle\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-config-data\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.740794 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b1b4f65-ed06-4d6d-9e74-b27255748225-config-data-custom\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.778549 
4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d9875665c-r8sg4"] Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.838106 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b6b6b98f4-9l69x"] Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842461 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1b4f65-ed06-4d6d-9e74-b27255748225-combined-ca-bundle\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842503 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdw6v\" (UniqueName: \"kubernetes.io/projected/2b1b4f65-ed06-4d6d-9e74-b27255748225-kube-api-access-tdw6v\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842536 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1b4f65-ed06-4d6d-9e74-b27255748225-config-data\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842591 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-fernet-keys\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-internal-tls-certs\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842653 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-combined-ca-bundle\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842673 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-credential-keys\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842696 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-config-data\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842747 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b1b4f65-ed06-4d6d-9e74-b27255748225-config-data-custom\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842784 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-scripts\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842825 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9zzr\" (UniqueName: \"kubernetes.io/projected/66326df4-af7d-474c-b63f-eee554099e1c-kube-api-access-f9zzr\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842853 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-public-tls-certs\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.842878 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1b4f65-ed06-4d6d-9e74-b27255748225-logs\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.849325 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.859301 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.870348 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1b4f65-ed06-4d6d-9e74-b27255748225-logs\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.957366 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b1b4f65-ed06-4d6d-9e74-b27255748225-config-data-custom\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.972316 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdw6v\" (UniqueName: \"kubernetes.io/projected/2b1b4f65-ed06-4d6d-9e74-b27255748225-kube-api-access-tdw6v\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.979206 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1b4f65-ed06-4d6d-9e74-b27255748225-config-data\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.984448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-scripts\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:37 crc kubenswrapper[4760]: I1125 08:28:37.987686 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-credential-keys\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:37.998818 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-internal-tls-certs\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.001916 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-fernet-keys\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.003532 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9zzr\" (UniqueName: \"kubernetes.io/projected/66326df4-af7d-474c-b63f-eee554099e1c-kube-api-access-f9zzr\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.003897 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q86ms\" (UniqueName: \"kubernetes.io/projected/5d7c9636-175f-4d7e-b3c7-86586c9a8734-kube-api-access-q86ms\") pod 
\"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.003993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7c9636-175f-4d7e-b3c7-86586c9a8734-config-data\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.004108 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7c9636-175f-4d7e-b3c7-86586c9a8734-config-data-custom\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.004162 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7c9636-175f-4d7e-b3c7-86586c9a8734-combined-ca-bundle\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.004189 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-combined-ca-bundle\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.004329 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7c9636-175f-4d7e-b3c7-86586c9a8734-logs\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.007572 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b6b6b98f4-9l69x"] Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.055634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-config-data\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.059550 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1b4f65-ed06-4d6d-9e74-b27255748225-combined-ca-bundle\") pod \"barbican-worker-5d9875665c-r8sg4\" (UID: \"2b1b4f65-ed06-4d6d-9e74-b27255748225\") " pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.064041 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6677d66f85-v9q29"] Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.069529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66326df4-af7d-474c-b63f-eee554099e1c-public-tls-certs\") pod \"keystone-69cbccbbcc-v8kx4\" (UID: \"66326df4-af7d-474c-b63f-eee554099e1c\") " pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.092331 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-844b557b9c-qhcjl"] Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.094675 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.097832 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844b557b9c-qhcjl"] Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.106406 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q86ms\" (UniqueName: \"kubernetes.io/projected/5d7c9636-175f-4d7e-b3c7-86586c9a8734-kube-api-access-q86ms\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.106521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7c9636-175f-4d7e-b3c7-86586c9a8734-config-data\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.106609 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7c9636-175f-4d7e-b3c7-86586c9a8734-config-data-custom\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.106636 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7c9636-175f-4d7e-b3c7-86586c9a8734-combined-ca-bundle\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: 
I1125 08:28:38.106724 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7c9636-175f-4d7e-b3c7-86586c9a8734-logs\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.113638 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7c9636-175f-4d7e-b3c7-86586c9a8734-config-data-custom\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.115588 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7c9636-175f-4d7e-b3c7-86586c9a8734-logs\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.129187 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-64dc9dbb9b-7dhpt"] Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.136374 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7c9636-175f-4d7e-b3c7-86586c9a8734-combined-ca-bundle\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.138108 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7c9636-175f-4d7e-b3c7-86586c9a8734-config-data\") pod 
\"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.149492 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64dc9dbb9b-7dhpt"] Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.149628 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.152433 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.161773 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q86ms\" (UniqueName: \"kubernetes.io/projected/5d7c9636-175f-4d7e-b3c7-86586c9a8734-kube-api-access-q86ms\") pod \"barbican-keystone-listener-6b6b6b98f4-9l69x\" (UID: \"5d7c9636-175f-4d7e-b3c7-86586c9a8734\") " pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.208122 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-dns-svc\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.208184 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-combined-ca-bundle\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.208215 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-logs\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.208358 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk5cv\" (UniqueName: \"kubernetes.io/projected/8499ed65-d46c-4e61-b113-06350f33838c-kube-api-access-nk5cv\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.208386 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkm99\" (UniqueName: \"kubernetes.io/projected/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-kube-api-access-pkm99\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.208429 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-config\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.208467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-nb\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc 
kubenswrapper[4760]: I1125 08:28:38.208496 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-sb\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.208590 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.208630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data-custom\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.226561 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311373 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311447 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data-custom\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311539 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-dns-svc\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311611 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-combined-ca-bundle\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-logs\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " 
pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311674 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk5cv\" (UniqueName: \"kubernetes.io/projected/8499ed65-d46c-4e61-b113-06350f33838c-kube-api-access-nk5cv\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311700 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkm99\" (UniqueName: \"kubernetes.io/projected/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-kube-api-access-pkm99\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-config\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311781 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-nb\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.311805 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-sb\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " 
pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.313089 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-sb\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.315131 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-dns-svc\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.316084 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-logs\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.321692 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.322239 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-nb\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.322504 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data-custom\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.323215 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-combined-ca-bundle\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.331987 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-config\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.341362 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d9875665c-r8sg4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.342427 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk5cv\" (UniqueName: \"kubernetes.io/projected/8499ed65-d46c-4e61-b113-06350f33838c-kube-api-access-nk5cv\") pod \"dnsmasq-dns-844b557b9c-qhcjl\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.345995 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.350663 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkm99\" (UniqueName: \"kubernetes.io/projected/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-kube-api-access-pkm99\") pod \"barbican-api-64dc9dbb9b-7dhpt\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.530420 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-598d8454cd-s4vpx"] Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.566350 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.571323 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.576065 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-598d8454cd-s4vpx" event={"ID":"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4","Type":"ContainerStarted","Data":"e0bfb9e182375ec38d6927840b774aea26ebf06019fb5343a6c5d52d265157ac"} Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.587649 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ff756f59b-f8nvt" event={"ID":"e6613503-bc56-448f-aa4a-ef1e4003bfb2","Type":"ContainerStarted","Data":"6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961"} Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.589289 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.639310 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" 
podUID="a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" containerName="dnsmasq-dns" containerID="cri-o://595c98829e17f74b3e25d178fdc3b320154ddeb3c07097587d1cd3a201e98a88" gracePeriod=10 Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.639442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" event={"ID":"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df","Type":"ContainerStarted","Data":"595c98829e17f74b3e25d178fdc3b320154ddeb3c07097587d1cd3a201e98a88"} Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.639838 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.677149 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564c475cd5-6wg66" event={"ID":"9937626b-b050-469f-9e47-78785cfb5c15","Type":"ContainerStarted","Data":"f6e0ea4c513cd5cd390ac47a8f288434505abc9317b7c253ca669a3f276ec5e2"} Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.678210 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.685774 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7ff756f59b-f8nvt" podStartSLOduration=7.685749078 podStartE2EDuration="7.685749078s" podCreationTimestamp="2025-11-25 08:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:38.623794829 +0000 UTC m=+1052.332825644" watchObservedRunningTime="2025-11-25 08:28:38.685749078 +0000 UTC m=+1052.394779873" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.687792 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" podStartSLOduration=7.687778625 podStartE2EDuration="7.687778625s" podCreationTimestamp="2025-11-25 
08:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:38.673989122 +0000 UTC m=+1052.383019917" watchObservedRunningTime="2025-11-25 08:28:38.687778625 +0000 UTC m=+1052.396809420" Nov 25 08:28:38 crc kubenswrapper[4760]: I1125 08:28:38.714369 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-564c475cd5-6wg66" podStartSLOduration=4.714345892 podStartE2EDuration="4.714345892s" podCreationTimestamp="2025-11-25 08:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:38.708780388 +0000 UTC m=+1052.417811193" watchObservedRunningTime="2025-11-25 08:28:38.714345892 +0000 UTC m=+1052.423376687" Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.120927 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69cbccbbcc-v8kx4"] Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.217734 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b6b6b98f4-9l69x"] Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.260882 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d9875665c-r8sg4"] Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.373454 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844b557b9c-qhcjl"] Nov 25 08:28:39 crc kubenswrapper[4760]: W1125 08:28:39.497507 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8499ed65_d46c_4e61_b113_06350f33838c.slice/crio-0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377 WatchSource:0}: Error finding container 0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377: Status 404 returned error can't find the container with 
id 0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377 Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.611882 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64dc9dbb9b-7dhpt"] Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.691083 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-598d8454cd-s4vpx" event={"ID":"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4","Type":"ContainerStarted","Data":"5615b71035a4523c5780ffd180d76ee499abba229e6321ea8033d3bdcdf2fe5b"} Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.704138 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" event={"ID":"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d","Type":"ContainerStarted","Data":"f286339edb7c86f8ff121a1e8d0aa40bdc4bb7f9fc50589f8c5ef1de740827eb"} Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.723494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69cbccbbcc-v8kx4" event={"ID":"66326df4-af7d-474c-b63f-eee554099e1c","Type":"ContainerStarted","Data":"215beb8ae696f8b59b9138c0e0556bb500bc8067fcc9b748ca0bf1a7d7a1392a"} Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.723538 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69cbccbbcc-v8kx4" event={"ID":"66326df4-af7d-474c-b63f-eee554099e1c","Type":"ContainerStarted","Data":"65feba0b41d5d7e947215e9ff6ff3a03ac302bf3562b45114afe43c983e62a52"} Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.723885 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.749216 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" event={"ID":"8499ed65-d46c-4e61-b113-06350f33838c","Type":"ContainerStarted","Data":"0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377"} Nov 25 08:28:39 crc 
kubenswrapper[4760]: I1125 08:28:39.771319 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-69cbccbbcc-v8kx4" podStartSLOduration=2.77130005 podStartE2EDuration="2.77130005s" podCreationTimestamp="2025-11-25 08:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:39.769284534 +0000 UTC m=+1053.478315349" watchObservedRunningTime="2025-11-25 08:28:39.77130005 +0000 UTC m=+1053.480330855" Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.788613 4760 generic.go:334] "Generic (PLEG): container finished" podID="a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" containerID="595c98829e17f74b3e25d178fdc3b320154ddeb3c07097587d1cd3a201e98a88" exitCode=0 Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.788710 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" event={"ID":"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df","Type":"ContainerDied","Data":"595c98829e17f74b3e25d178fdc3b320154ddeb3c07097587d1cd3a201e98a88"} Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.790782 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" event={"ID":"5d7c9636-175f-4d7e-b3c7-86586c9a8734","Type":"ContainerStarted","Data":"45690ee912e257261b57d04e7f3a1bd251761fd27e9515ba1208dd6961f753fc"} Nov 25 08:28:39 crc kubenswrapper[4760]: I1125 08:28:39.797985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9875665c-r8sg4" event={"ID":"2b1b4f65-ed06-4d6d-9e74-b27255748225","Type":"ContainerStarted","Data":"5f36dbd7b9ff17934323aa40ac8cfa88506160c8de4e00d8cb71a2a62f9ecf96"} Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.096889 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.266687 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-config\") pod \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.266815 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-dns-svc\") pod \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.266935 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-sb\") pod \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.267064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brj7p\" (UniqueName: \"kubernetes.io/projected/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-kube-api-access-brj7p\") pod \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.267096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-nb\") pod \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\" (UID: \"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df\") " Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.274119 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-kube-api-access-brj7p" (OuterVolumeSpecName: "kube-api-access-brj7p") pod "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" (UID: "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df"). InnerVolumeSpecName "kube-api-access-brj7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.348237 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-config" (OuterVolumeSpecName: "config") pod "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" (UID: "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.374754 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brj7p\" (UniqueName: \"kubernetes.io/projected/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-kube-api-access-brj7p\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.374788 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.384730 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" (UID: "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.387826 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" (UID: "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.390156 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" (UID: "a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.477071 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.477525 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.477540 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.809931 4760 generic.go:334] "Generic (PLEG): container finished" podID="8499ed65-d46c-4e61-b113-06350f33838c" containerID="825259b322bdf7c811e153002b4235f40936303b06472afe3162f71f7da5b6b9" exitCode=0 Nov 25 08:28:40 crc 
kubenswrapper[4760]: I1125 08:28:40.810006 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" event={"ID":"8499ed65-d46c-4e61-b113-06350f33838c","Type":"ContainerDied","Data":"825259b322bdf7c811e153002b4235f40936303b06472afe3162f71f7da5b6b9"} Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.813672 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" event={"ID":"a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df","Type":"ContainerDied","Data":"79d783065f2044bf14b63145dc5c4c47e5bf7233c9c329dd0f396187b7f5cad7"} Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.813735 4760 scope.go:117] "RemoveContainer" containerID="595c98829e17f74b3e25d178fdc3b320154ddeb3c07097587d1cd3a201e98a88" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.813734 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6677d66f85-v9q29" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.820411 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-598d8454cd-s4vpx" event={"ID":"5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4","Type":"ContainerStarted","Data":"b8749f4601323296aa6edf1e581c5556372614b1b8abda9f27652b4afad9e6bc"} Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.821276 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.821400 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.824076 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" event={"ID":"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d","Type":"ContainerStarted","Data":"d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc"} Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 
08:28:40.824186 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.824411 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.824428 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" event={"ID":"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d","Type":"ContainerStarted","Data":"47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f"} Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.850450 4760 scope.go:117] "RemoveContainer" containerID="ee75532f775aa5177aeb781f3e4e8146bbacd8f9cfb86b1ad416f1f274c25d34" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.881452 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-598d8454cd-s4vpx" podStartSLOduration=3.881405674 podStartE2EDuration="3.881405674s" podCreationTimestamp="2025-11-25 08:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:40.870015328 +0000 UTC m=+1054.579046143" watchObservedRunningTime="2025-11-25 08:28:40.881405674 +0000 UTC m=+1054.590436489" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.904731 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" podStartSLOduration=2.90470489 podStartE2EDuration="2.90470489s" podCreationTimestamp="2025-11-25 08:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:40.890234109 +0000 UTC m=+1054.599264914" watchObservedRunningTime="2025-11-25 08:28:40.90470489 +0000 UTC m=+1054.613735685" Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 
08:28:40.982981 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6677d66f85-v9q29"] Nov 25 08:28:40 crc kubenswrapper[4760]: I1125 08:28:40.989295 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6677d66f85-v9q29"] Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.260082 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d84fc8b6b-jxtfg"] Nov 25 08:28:41 crc kubenswrapper[4760]: E1125 08:28:41.260781 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" containerName="init" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.260800 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" containerName="init" Nov 25 08:28:41 crc kubenswrapper[4760]: E1125 08:28:41.260853 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" containerName="dnsmasq-dns" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.260861 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" containerName="dnsmasq-dns" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.261060 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" containerName="dnsmasq-dns" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.261971 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.273542 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.273621 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d84fc8b6b-jxtfg"] Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.273826 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.293672 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-combined-ca-bundle\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.293750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-internal-tls-certs\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.293774 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-public-tls-certs\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.293794 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dgm25\" (UniqueName: \"kubernetes.io/projected/d99a8e14-f31b-45d8-8e74-8ace724974ad-kube-api-access-dgm25\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.293821 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-config-data-custom\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.293852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d99a8e14-f31b-45d8-8e74-8ace724974ad-logs\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.294092 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-config-data\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.395771 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-config-data\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.395923 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-combined-ca-bundle\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.395995 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-internal-tls-certs\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.396021 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-public-tls-certs\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.396053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgm25\" (UniqueName: \"kubernetes.io/projected/d99a8e14-f31b-45d8-8e74-8ace724974ad-kube-api-access-dgm25\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.396083 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-config-data-custom\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.396116 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d99a8e14-f31b-45d8-8e74-8ace724974ad-logs\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.396941 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d99a8e14-f31b-45d8-8e74-8ace724974ad-logs\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.404003 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-internal-tls-certs\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.405457 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-config-data\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.406047 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-combined-ca-bundle\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.409617 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-public-tls-certs\") pod 
\"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.415925 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d99a8e14-f31b-45d8-8e74-8ace724974ad-config-data-custom\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.419393 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgm25\" (UniqueName: \"kubernetes.io/projected/d99a8e14-f31b-45d8-8e74-8ace724974ad-kube-api-access-dgm25\") pod \"barbican-api-6d84fc8b6b-jxtfg\" (UID: \"d99a8e14-f31b-45d8-8e74-8ace724974ad\") " pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.551616 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b7dd9bf58-zdxgq" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.584445 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.675003 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6655684d54-8jfvz" podUID="0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.847340 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" event={"ID":"8499ed65-d46c-4e61-b113-06350f33838c","Type":"ContainerStarted","Data":"d8f23495c8b054fd6d85cdbf7fabc899422278942bc52c0d6cc7d1e6c30a9404"} Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.847516 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:41 crc kubenswrapper[4760]: I1125 08:28:41.881835 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" podStartSLOduration=4.881819163 podStartE2EDuration="4.881819163s" podCreationTimestamp="2025-11-25 08:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:41.880595909 +0000 UTC m=+1055.589626714" watchObservedRunningTime="2025-11-25 08:28:41.881819163 +0000 UTC m=+1055.590849948" Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.377793 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d84fc8b6b-jxtfg"] Nov 25 08:28:42 crc kubenswrapper[4760]: W1125 08:28:42.438319 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd99a8e14_f31b_45d8_8e74_8ace724974ad.slice/crio-f22d31455233bbf36db3e62779787f2918e220296b35e31755be7ff9a62124c8 
WatchSource:0}: Error finding container f22d31455233bbf36db3e62779787f2918e220296b35e31755be7ff9a62124c8: Status 404 returned error can't find the container with id f22d31455233bbf36db3e62779787f2918e220296b35e31755be7ff9a62124c8 Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.866082 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9875665c-r8sg4" event={"ID":"2b1b4f65-ed06-4d6d-9e74-b27255748225","Type":"ContainerStarted","Data":"c9fa83efe7e789d1cef9466fccb21797ca6afee86b0be8ace99f91c9d34ce3f3"} Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.866758 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9875665c-r8sg4" event={"ID":"2b1b4f65-ed06-4d6d-9e74-b27255748225","Type":"ContainerStarted","Data":"f7ae5941f48a5e809af6fd6a9d25c51e762a1d36e1f54516d3bfc6d14e2efe1b"} Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.870871 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" event={"ID":"d99a8e14-f31b-45d8-8e74-8ace724974ad","Type":"ContainerStarted","Data":"4c6ef9703ec0933efe905f03193accaa7a789b82a9654fbe800f9acaa54c3006"} Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.870913 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" event={"ID":"d99a8e14-f31b-45d8-8e74-8ace724974ad","Type":"ContainerStarted","Data":"f22d31455233bbf36db3e62779787f2918e220296b35e31755be7ff9a62124c8"} Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.873503 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" event={"ID":"5d7c9636-175f-4d7e-b3c7-86586c9a8734","Type":"ContainerStarted","Data":"ed477b02964f6eb80839996419e6a6de801d93a6991eef506340681d2a4f1a40"} Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.873526 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" event={"ID":"5d7c9636-175f-4d7e-b3c7-86586c9a8734","Type":"ContainerStarted","Data":"e6abf40f9ad38f0d21af2f1ccf0fa75eedf095fa4fc460f4662a78096f317197"} Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.882628 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5d9875665c-r8sg4" podStartSLOduration=3.519661552 podStartE2EDuration="5.882611632s" podCreationTimestamp="2025-11-25 08:28:37 +0000 UTC" firstStartedPulling="2025-11-25 08:28:39.498437496 +0000 UTC m=+1053.207468291" lastFinishedPulling="2025-11-25 08:28:41.861387576 +0000 UTC m=+1055.570418371" observedRunningTime="2025-11-25 08:28:42.881777399 +0000 UTC m=+1056.590808194" watchObservedRunningTime="2025-11-25 08:28:42.882611632 +0000 UTC m=+1056.591642427" Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.927575 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b6b6b98f4-9l69x" podStartSLOduration=3.243588899 podStartE2EDuration="5.927541349s" podCreationTimestamp="2025-11-25 08:28:37 +0000 UTC" firstStartedPulling="2025-11-25 08:28:39.226690723 +0000 UTC m=+1052.935721528" lastFinishedPulling="2025-11-25 08:28:41.910643173 +0000 UTC m=+1055.619673978" observedRunningTime="2025-11-25 08:28:42.924002801 +0000 UTC m=+1056.633033606" watchObservedRunningTime="2025-11-25 08:28:42.927541349 +0000 UTC m=+1056.636572144" Nov 25 08:28:42 crc kubenswrapper[4760]: I1125 08:28:42.974018 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df" path="/var/lib/kubelet/pods/a5a8a8b4-cfc8-41e5-9436-a45e5e51f8df/volumes" Nov 25 08:28:43 crc kubenswrapper[4760]: I1125 08:28:43.892618 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pk2zm" 
event={"ID":"99920db5-d382-4159-a705-53428f8a61a8","Type":"ContainerStarted","Data":"eefb30894d640e7c8009088b0ec1b6f61c9f9b96d25fb9f785cf880b97d2c7f5"} Nov 25 08:28:43 crc kubenswrapper[4760]: I1125 08:28:43.899085 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" event={"ID":"d99a8e14-f31b-45d8-8e74-8ace724974ad","Type":"ContainerStarted","Data":"b4a95c88ff5464bc538ac51b0388940ee7314edc848defe5a1e4f22979c64ee8"} Nov 25 08:28:43 crc kubenswrapper[4760]: I1125 08:28:43.899927 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:43 crc kubenswrapper[4760]: I1125 08:28:43.899961 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:43 crc kubenswrapper[4760]: I1125 08:28:43.912043 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-pk2zm" podStartSLOduration=3.946666143 podStartE2EDuration="51.912023396s" podCreationTimestamp="2025-11-25 08:27:52 +0000 UTC" firstStartedPulling="2025-11-25 08:27:53.944717294 +0000 UTC m=+1007.653748089" lastFinishedPulling="2025-11-25 08:28:41.910074547 +0000 UTC m=+1055.619105342" observedRunningTime="2025-11-25 08:28:43.911518462 +0000 UTC m=+1057.620549267" watchObservedRunningTime="2025-11-25 08:28:43.912023396 +0000 UTC m=+1057.621054191" Nov 25 08:28:43 crc kubenswrapper[4760]: I1125 08:28:43.937979 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" podStartSLOduration=2.937956896 podStartE2EDuration="2.937956896s" podCreationTimestamp="2025-11-25 08:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:43.934827709 +0000 UTC m=+1057.643858504" watchObservedRunningTime="2025-11-25 08:28:43.937956896 +0000 UTC 
m=+1057.646987691" Nov 25 08:28:48 crc kubenswrapper[4760]: I1125 08:28:48.568681 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:28:48 crc kubenswrapper[4760]: I1125 08:28:48.668338 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f4bdbdb7-52nlh"] Nov 25 08:28:48 crc kubenswrapper[4760]: I1125 08:28:48.668602 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" podUID="21b73fe9-d0be-4c1f-bb9d-567ac13113c8" containerName="dnsmasq-dns" containerID="cri-o://5bc1b535d6f0fea6baabe1bb6de1d66f7b0b43ff47795d0a51c97d1b393af140" gracePeriod=10 Nov 25 08:28:48 crc kubenswrapper[4760]: I1125 08:28:48.976673 4760 generic.go:334] "Generic (PLEG): container finished" podID="21b73fe9-d0be-4c1f-bb9d-567ac13113c8" containerID="5bc1b535d6f0fea6baabe1bb6de1d66f7b0b43ff47795d0a51c97d1b393af140" exitCode=0 Nov 25 08:28:48 crc kubenswrapper[4760]: I1125 08:28:48.977028 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" event={"ID":"21b73fe9-d0be-4c1f-bb9d-567ac13113c8","Type":"ContainerDied","Data":"5bc1b535d6f0fea6baabe1bb6de1d66f7b0b43ff47795d0a51c97d1b393af140"} Nov 25 08:28:49 crc kubenswrapper[4760]: I1125 08:28:49.619303 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.588548 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.632364 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.750978 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.783640 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-dns-svc\") pod \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.784060 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-nb\") pod \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.784103 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmcx5\" (UniqueName: \"kubernetes.io/projected/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-kube-api-access-mmcx5\") pod \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.784173 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-config\") pod \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\" (UID: \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.784223 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-sb\") pod \"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\" (UID: 
\"21b73fe9-d0be-4c1f-bb9d-567ac13113c8\") " Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.817513 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-kube-api-access-mmcx5" (OuterVolumeSpecName: "kube-api-access-mmcx5") pod "21b73fe9-d0be-4c1f-bb9d-567ac13113c8" (UID: "21b73fe9-d0be-4c1f-bb9d-567ac13113c8"). InnerVolumeSpecName "kube-api-access-mmcx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.888183 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmcx5\" (UniqueName: \"kubernetes.io/projected/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-kube-api-access-mmcx5\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.912983 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21b73fe9-d0be-4c1f-bb9d-567ac13113c8" (UID: "21b73fe9-d0be-4c1f-bb9d-567ac13113c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.921767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21b73fe9-d0be-4c1f-bb9d-567ac13113c8" (UID: "21b73fe9-d0be-4c1f-bb9d-567ac13113c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.981476 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-config" (OuterVolumeSpecName: "config") pod "21b73fe9-d0be-4c1f-bb9d-567ac13113c8" (UID: "21b73fe9-d0be-4c1f-bb9d-567ac13113c8"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.993388 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.993445 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:50 crc kubenswrapper[4760]: I1125 08:28:50.993461 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.010633 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.013971 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21b73fe9-d0be-4c1f-bb9d-567ac13113c8" (UID: "21b73fe9-d0be-4c1f-bb9d-567ac13113c8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.072757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f4bdbdb7-52nlh" event={"ID":"21b73fe9-d0be-4c1f-bb9d-567ac13113c8","Type":"ContainerDied","Data":"595c7ce3257ca2d71044835affabe1af0affd7d1f40d67da6dad12b208b812d5"} Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.072820 4760 scope.go:117] "RemoveContainer" containerID="5bc1b535d6f0fea6baabe1bb6de1d66f7b0b43ff47795d0a51c97d1b393af140" Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.098213 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21b73fe9-d0be-4c1f-bb9d-567ac13113c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.124437 4760 scope.go:117] "RemoveContainer" containerID="8c759af517bb41499f996791849ca3fb24b9b1dd20902c3b38793e0a6e3060e3" Nov 25 08:28:51 crc kubenswrapper[4760]: E1125 08:28:51.175852 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.355019 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f4bdbdb7-52nlh"] Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.365237 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66f4bdbdb7-52nlh"] Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.549891 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b7dd9bf58-zdxgq" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 08:28:51 crc kubenswrapper[4760]: I1125 08:28:51.670932 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6655684d54-8jfvz" podUID="0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Nov 25 08:28:52 crc kubenswrapper[4760]: I1125 08:28:52.021222 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e555d8-60bd-48d7-bb21-04133ffa1042","Type":"ContainerStarted","Data":"1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d"} Nov 25 08:28:52 crc kubenswrapper[4760]: I1125 08:28:52.021346 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 08:28:52 crc kubenswrapper[4760]: I1125 08:28:52.021361 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="ceilometer-notification-agent" containerID="cri-o://063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730" gracePeriod=30 Nov 25 08:28:52 crc kubenswrapper[4760]: I1125 08:28:52.021432 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="proxy-httpd" containerID="cri-o://1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d" gracePeriod=30 Nov 25 08:28:52 crc kubenswrapper[4760]: I1125 08:28:52.021445 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="sg-core" 
containerID="cri-o://a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2" gracePeriod=30 Nov 25 08:28:52 crc kubenswrapper[4760]: I1125 08:28:52.980594 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b73fe9-d0be-4c1f-bb9d-567ac13113c8" path="/var/lib/kubelet/pods/21b73fe9-d0be-4c1f-bb9d-567ac13113c8/volumes" Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.039091 4760 generic.go:334] "Generic (PLEG): container finished" podID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerID="1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d" exitCode=0 Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.039119 4760 generic.go:334] "Generic (PLEG): container finished" podID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerID="a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2" exitCode=2 Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.039163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e555d8-60bd-48d7-bb21-04133ffa1042","Type":"ContainerDied","Data":"1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d"} Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.039200 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e555d8-60bd-48d7-bb21-04133ffa1042","Type":"ContainerDied","Data":"a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2"} Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.040981 4760 generic.go:334] "Generic (PLEG): container finished" podID="99920db5-d382-4159-a705-53428f8a61a8" containerID="eefb30894d640e7c8009088b0ec1b6f61c9f9b96d25fb9f785cf880b97d2c7f5" exitCode=0 Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.041019 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pk2zm" 
event={"ID":"99920db5-d382-4159-a705-53428f8a61a8","Type":"ContainerDied","Data":"eefb30894d640e7c8009088b0ec1b6f61c9f9b96d25fb9f785cf880b97d2c7f5"} Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.269375 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.327809 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d84fc8b6b-jxtfg" Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.398471 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64dc9dbb9b-7dhpt"] Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.398745 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api-log" containerID="cri-o://47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f" gracePeriod=30 Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.398789 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api" containerID="cri-o://d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc" gracePeriod=30 Nov 25 08:28:53 crc kubenswrapper[4760]: I1125 08:28:53.977445 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.066920 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-combined-ca-bundle\") pod \"15e555d8-60bd-48d7-bb21-04133ffa1042\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.067023 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-scripts\") pod \"15e555d8-60bd-48d7-bb21-04133ffa1042\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.067065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-config-data\") pod \"15e555d8-60bd-48d7-bb21-04133ffa1042\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.067102 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-sg-core-conf-yaml\") pod \"15e555d8-60bd-48d7-bb21-04133ffa1042\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.067142 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-log-httpd\") pod \"15e555d8-60bd-48d7-bb21-04133ffa1042\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.067203 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-run-httpd\") pod \"15e555d8-60bd-48d7-bb21-04133ffa1042\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.067333 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2rds\" (UniqueName: \"kubernetes.io/projected/15e555d8-60bd-48d7-bb21-04133ffa1042-kube-api-access-p2rds\") pod \"15e555d8-60bd-48d7-bb21-04133ffa1042\" (UID: \"15e555d8-60bd-48d7-bb21-04133ffa1042\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.069020 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15e555d8-60bd-48d7-bb21-04133ffa1042" (UID: "15e555d8-60bd-48d7-bb21-04133ffa1042"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.069287 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15e555d8-60bd-48d7-bb21-04133ffa1042" (UID: "15e555d8-60bd-48d7-bb21-04133ffa1042"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.075451 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-scripts" (OuterVolumeSpecName: "scripts") pod "15e555d8-60bd-48d7-bb21-04133ffa1042" (UID: "15e555d8-60bd-48d7-bb21-04133ffa1042"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.077930 4760 generic.go:334] "Generic (PLEG): container finished" podID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerID="063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730" exitCode=0 Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.078026 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e555d8-60bd-48d7-bb21-04133ffa1042","Type":"ContainerDied","Data":"063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730"} Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.078054 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15e555d8-60bd-48d7-bb21-04133ffa1042","Type":"ContainerDied","Data":"23575b78c5447d02ba668ca021bc202e5676e619817d91f3ee5253d7c3c9b8fa"} Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.078073 4760 scope.go:117] "RemoveContainer" containerID="1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.078216 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.088360 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e555d8-60bd-48d7-bb21-04133ffa1042-kube-api-access-p2rds" (OuterVolumeSpecName: "kube-api-access-p2rds") pod "15e555d8-60bd-48d7-bb21-04133ffa1042" (UID: "15e555d8-60bd-48d7-bb21-04133ffa1042"). InnerVolumeSpecName "kube-api-access-p2rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.089536 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerID="47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f" exitCode=143 Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.090395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" event={"ID":"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d","Type":"ContainerDied","Data":"47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f"} Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.114407 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15e555d8-60bd-48d7-bb21-04133ffa1042" (UID: "15e555d8-60bd-48d7-bb21-04133ffa1042"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.145044 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15e555d8-60bd-48d7-bb21-04133ffa1042" (UID: "15e555d8-60bd-48d7-bb21-04133ffa1042"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.169486 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.169526 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.169537 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15e555d8-60bd-48d7-bb21-04133ffa1042-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.169547 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2rds\" (UniqueName: \"kubernetes.io/projected/15e555d8-60bd-48d7-bb21-04133ffa1042-kube-api-access-p2rds\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.169559 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.169567 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.177400 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-config-data" (OuterVolumeSpecName: "config-data") pod "15e555d8-60bd-48d7-bb21-04133ffa1042" (UID: "15e555d8-60bd-48d7-bb21-04133ffa1042"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.187103 4760 scope.go:117] "RemoveContainer" containerID="a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.240460 4760 scope.go:117] "RemoveContainer" containerID="063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.272652 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e555d8-60bd-48d7-bb21-04133ffa1042-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.284958 4760 scope.go:117] "RemoveContainer" containerID="1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d" Nov 25 08:28:54 crc kubenswrapper[4760]: E1125 08:28:54.285705 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d\": container with ID starting with 1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d not found: ID does not exist" containerID="1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.285768 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d"} err="failed to get container status \"1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d\": rpc error: code = NotFound desc = could not find container \"1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d\": container with ID starting with 1e52a7781a20d56d2d696c893eac56010a2aba71b9a3840334975569f0d88f5d not found: ID does not exist" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 
08:28:54.285803 4760 scope.go:117] "RemoveContainer" containerID="a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2" Nov 25 08:28:54 crc kubenswrapper[4760]: E1125 08:28:54.286535 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2\": container with ID starting with a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2 not found: ID does not exist" containerID="a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.286835 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2"} err="failed to get container status \"a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2\": rpc error: code = NotFound desc = could not find container \"a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2\": container with ID starting with a813e2b89e19387ceff1332d495ad831a10a4b74e345398dc872d5c997184da2 not found: ID does not exist" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.286861 4760 scope.go:117] "RemoveContainer" containerID="063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730" Nov 25 08:28:54 crc kubenswrapper[4760]: E1125 08:28:54.287371 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730\": container with ID starting with 063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730 not found: ID does not exist" containerID="063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.287401 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730"} err="failed to get container status \"063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730\": rpc error: code = NotFound desc = could not find container \"063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730\": container with ID starting with 063f1735256640d68e1913818d589948d7c562af91938278c3f12597fc43b730 not found: ID does not exist" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.404414 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.467474 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.475220 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9gqv\" (UniqueName: \"kubernetes.io/projected/99920db5-d382-4159-a705-53428f8a61a8-kube-api-access-h9gqv\") pod \"99920db5-d382-4159-a705-53428f8a61a8\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.475291 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99920db5-d382-4159-a705-53428f8a61a8-etc-machine-id\") pod \"99920db5-d382-4159-a705-53428f8a61a8\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.475316 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-scripts\") pod \"99920db5-d382-4159-a705-53428f8a61a8\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.475425 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-combined-ca-bundle\") pod \"99920db5-d382-4159-a705-53428f8a61a8\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.475469 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-db-sync-config-data\") pod \"99920db5-d382-4159-a705-53428f8a61a8\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.475511 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-config-data\") pod \"99920db5-d382-4159-a705-53428f8a61a8\" (UID: \"99920db5-d382-4159-a705-53428f8a61a8\") " Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.484735 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99920db5-d382-4159-a705-53428f8a61a8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "99920db5-d382-4159-a705-53428f8a61a8" (UID: "99920db5-d382-4159-a705-53428f8a61a8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.486404 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-scripts" (OuterVolumeSpecName: "scripts") pod "99920db5-d382-4159-a705-53428f8a61a8" (UID: "99920db5-d382-4159-a705-53428f8a61a8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.521547 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99920db5-d382-4159-a705-53428f8a61a8-kube-api-access-h9gqv" (OuterVolumeSpecName: "kube-api-access-h9gqv") pod "99920db5-d382-4159-a705-53428f8a61a8" (UID: "99920db5-d382-4159-a705-53428f8a61a8"). InnerVolumeSpecName "kube-api-access-h9gqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.521674 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "99920db5-d382-4159-a705-53428f8a61a8" (UID: "99920db5-d382-4159-a705-53428f8a61a8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.523383 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.525196 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99920db5-d382-4159-a705-53428f8a61a8" (UID: "99920db5-d382-4159-a705-53428f8a61a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.561638 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:28:54 crc kubenswrapper[4760]: E1125 08:28:54.562772 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b73fe9-d0be-4c1f-bb9d-567ac13113c8" containerName="dnsmasq-dns" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.562872 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b73fe9-d0be-4c1f-bb9d-567ac13113c8" containerName="dnsmasq-dns" Nov 25 08:28:54 crc kubenswrapper[4760]: E1125 08:28:54.562951 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="ceilometer-notification-agent" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.562972 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="ceilometer-notification-agent" Nov 25 08:28:54 crc kubenswrapper[4760]: E1125 08:28:54.563085 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="sg-core" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.563107 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="sg-core" Nov 25 08:28:54 crc kubenswrapper[4760]: E1125 08:28:54.563192 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="proxy-httpd" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.563203 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="proxy-httpd" Nov 25 08:28:54 crc kubenswrapper[4760]: E1125 08:28:54.563345 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99920db5-d382-4159-a705-53428f8a61a8" containerName="cinder-db-sync" Nov 25 
08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.563362 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="99920db5-d382-4159-a705-53428f8a61a8" containerName="cinder-db-sync" Nov 25 08:28:54 crc kubenswrapper[4760]: E1125 08:28:54.563416 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b73fe9-d0be-4c1f-bb9d-567ac13113c8" containerName="init" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.563431 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b73fe9-d0be-4c1f-bb9d-567ac13113c8" containerName="init" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.565372 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b73fe9-d0be-4c1f-bb9d-567ac13113c8" containerName="dnsmasq-dns" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.565437 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="ceilometer-notification-agent" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.565457 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="99920db5-d382-4159-a705-53428f8a61a8" containerName="cinder-db-sync" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.565484 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="sg-core" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.565516 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" containerName="proxy-httpd" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.568376 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-config-data" (OuterVolumeSpecName: "config-data") pod "99920db5-d382-4159-a705-53428f8a61a8" (UID: "99920db5-d382-4159-a705-53428f8a61a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.578716 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.578920 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.578973 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.578983 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.578993 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9gqv\" (UniqueName: \"kubernetes.io/projected/99920db5-d382-4159-a705-53428f8a61a8-kube-api-access-h9gqv\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.579005 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99920db5-d382-4159-a705-53428f8a61a8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.579018 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99920db5-d382-4159-a705-53428f8a61a8-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.584184 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 
08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.584929 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.618316 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.680938 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.680999 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.681058 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-config-data\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.681082 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-log-httpd\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.681107 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8vx\" 
(UniqueName: \"kubernetes.io/projected/30ead1cc-7ac6-4208-ba63-d5e41160e015-kube-api-access-qw8vx\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.681121 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-run-httpd\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.681137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-scripts\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.782264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-config-data\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.782332 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-log-httpd\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.782374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8vx\" (UniqueName: \"kubernetes.io/projected/30ead1cc-7ac6-4208-ba63-d5e41160e015-kube-api-access-qw8vx\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 
08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.782397 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-run-httpd\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.782419 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-scripts\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.782495 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.782547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.785358 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-log-httpd\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.785960 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-run-httpd\") pod 
\"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.786537 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.787586 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-config-data\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.788682 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.790391 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-scripts\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.800786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8vx\" (UniqueName: \"kubernetes.io/projected/30ead1cc-7ac6-4208-ba63-d5e41160e015-kube-api-access-qw8vx\") pod \"ceilometer-0\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " pod="openstack/ceilometer-0" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.949932 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="15e555d8-60bd-48d7-bb21-04133ffa1042" path="/var/lib/kubelet/pods/15e555d8-60bd-48d7-bb21-04133ffa1042/volumes" Nov 25 08:28:54 crc kubenswrapper[4760]: I1125 08:28:54.974336 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.104323 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pk2zm" event={"ID":"99920db5-d382-4159-a705-53428f8a61a8","Type":"ContainerDied","Data":"76d97d042d89fc3f957872bdc835ac6dc8c7c3290d9694dc62439f9994e6ab4d"} Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.104634 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d97d042d89fc3f957872bdc835ac6dc8c7c3290d9694dc62439f9994e6ab4d" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.104743 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pk2zm" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.244285 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:28:55 crc kubenswrapper[4760]: W1125 08:28:55.249796 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30ead1cc_7ac6_4208_ba63_d5e41160e015.slice/crio-0a887ef43879097646e4d0faf174058c0ee151133d55aa5ccc00265dc2e19d86 WatchSource:0}: Error finding container 0a887ef43879097646e4d0faf174058c0ee151133d55aa5ccc00265dc2e19d86: Status 404 returned error can't find the container with id 0a887ef43879097646e4d0faf174058c0ee151133d55aa5ccc00265dc2e19d86 Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.433965 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.436335 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.440829 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.444579 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.444922 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qvh9g" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.445068 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.450060 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-775457b975-8dft4"] Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.452016 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.474060 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-775457b975-8dft4"] Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.503102 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505169 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtwx2\" (UniqueName: \"kubernetes.io/projected/67b9bd30-6c4c-490c-a378-54c0ad55c528-kube-api-access-gtwx2\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505234 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-scripts\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505287 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505404 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-config\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 
08:28:55.505486 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505592 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-dns-svc\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505681 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-sb\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67b9bd30-6c4c-490c-a378-54c0ad55c528-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505867 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-nb\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.505919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l729g\" (UniqueName: \"kubernetes.io/projected/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-kube-api-access-l729g\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.609106 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.615604 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtwx2\" (UniqueName: \"kubernetes.io/projected/67b9bd30-6c4c-490c-a378-54c0ad55c528-kube-api-access-gtwx2\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.615663 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-scripts\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.615695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " 
pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.615759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-config\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.615804 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.615879 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.615905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-dns-svc\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.615928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-sb\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.616021 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67b9bd30-6c4c-490c-a378-54c0ad55c528-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.616054 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-nb\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.616083 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l729g\" (UniqueName: \"kubernetes.io/projected/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-kube-api-access-l729g\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.616880 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.618989 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67b9bd30-6c4c-490c-a378-54c0ad55c528-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.619708 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-sb\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.619962 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.620326 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-config\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.620356 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-nb\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.624394 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-dns-svc\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: 
\"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.627435 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.627743 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-scripts\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.635566 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.636018 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.640204 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data\") pod \"cinder-scheduler-0\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.640507 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtwx2\" (UniqueName: \"kubernetes.io/projected/67b9bd30-6c4c-490c-a378-54c0ad55c528-kube-api-access-gtwx2\") pod \"cinder-scheduler-0\" (UID: 
\"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.648461 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l729g\" (UniqueName: \"kubernetes.io/projected/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-kube-api-access-l729g\") pod \"dnsmasq-dns-775457b975-8dft4\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.718747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-logs\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.718858 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.718884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.718923 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: 
I1125 08:28:55.718950 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-scripts\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.718972 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.719000 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdpgr\" (UniqueName: \"kubernetes.io/projected/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-kube-api-access-fdpgr\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.772278 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.803711 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.821210 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-logs\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.821507 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.821533 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.821573 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.821594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-scripts\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.821611 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.821632 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdpgr\" (UniqueName: \"kubernetes.io/projected/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-kube-api-access-fdpgr\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.822327 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-logs\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.822393 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.830371 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.832922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.834900 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.838805 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-scripts\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:55 crc kubenswrapper[4760]: I1125 08:28:55.873904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdpgr\" (UniqueName: \"kubernetes.io/projected/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-kube-api-access-fdpgr\") pod \"cinder-api-0\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " pod="openstack/cinder-api-0" Nov 25 08:28:56 crc kubenswrapper[4760]: I1125 08:28:56.059709 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 08:28:56 crc kubenswrapper[4760]: I1125 08:28:56.148811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerStarted","Data":"0a887ef43879097646e4d0faf174058c0ee151133d55aa5ccc00265dc2e19d86"} Nov 25 08:28:56 crc kubenswrapper[4760]: I1125 08:28:56.441712 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-775457b975-8dft4"] Nov 25 08:28:56 crc kubenswrapper[4760]: W1125 08:28:56.458607 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f5081a_73e1_49f7_ac0a_d42c5271b6ba.slice/crio-d6b502d9f0f9b9629e55c631ce9b1cc5fc3f7e89a4bbdadd5d77cd9cff0f6989 WatchSource:0}: Error finding container d6b502d9f0f9b9629e55c631ce9b1cc5fc3f7e89a4bbdadd5d77cd9cff0f6989: Status 404 returned error can't find the container with id d6b502d9f0f9b9629e55c631ce9b1cc5fc3f7e89a4bbdadd5d77cd9cff0f6989 Nov 25 08:28:56 crc kubenswrapper[4760]: I1125 08:28:56.555475 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.153:9311/healthcheck\": read tcp 10.217.0.2:60670->10.217.0.153:9311: read: connection reset by peer" Nov 25 08:28:56 crc kubenswrapper[4760]: I1125 08:28:56.555461 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.153:9311/healthcheck\": read tcp 10.217.0.2:60678->10.217.0.153:9311: read: connection reset by peer" Nov 25 08:28:56 crc kubenswrapper[4760]: I1125 08:28:56.603184 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-scheduler-0"] Nov 25 08:28:56 crc kubenswrapper[4760]: W1125 08:28:56.609219 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67b9bd30_6c4c_490c_a378_54c0ad55c528.slice/crio-b221b649adf95e3498a2bdbfcd0a465ea0daae79fcfb9cab6a41db21c4a2eb10 WatchSource:0}: Error finding container b221b649adf95e3498a2bdbfcd0a465ea0daae79fcfb9cab6a41db21c4a2eb10: Status 404 returned error can't find the container with id b221b649adf95e3498a2bdbfcd0a465ea0daae79fcfb9cab6a41db21c4a2eb10 Nov 25 08:28:56 crc kubenswrapper[4760]: I1125 08:28:56.698813 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.050058 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.062651 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-combined-ca-bundle\") pod \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.062747 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkm99\" (UniqueName: \"kubernetes.io/projected/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-kube-api-access-pkm99\") pod \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.065795 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-logs\") pod \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " Nov 25 08:28:57 crc 
kubenswrapper[4760]: I1125 08:28:57.065926 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data-custom\") pod \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.065969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data\") pod \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\" (UID: \"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d\") " Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.066956 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-logs" (OuterVolumeSpecName: "logs") pod "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" (UID: "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.102621 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-kube-api-access-pkm99" (OuterVolumeSpecName: "kube-api-access-pkm99") pod "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" (UID: "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d"). InnerVolumeSpecName "kube-api-access-pkm99". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.112457 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" (UID: "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.170028 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.170066 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkm99\" (UniqueName: \"kubernetes.io/projected/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-kube-api-access-pkm99\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.170081 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.179678 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" (UID: "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.197810 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerID="d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc" exitCode=0 Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.197869 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" event={"ID":"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d","Type":"ContainerDied","Data":"d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc"} Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.197896 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" event={"ID":"ee81734e-089c-4bc2-9e9a-44c08ae4cb3d","Type":"ContainerDied","Data":"f286339edb7c86f8ff121a1e8d0aa40bdc4bb7f9fc50589f8c5ef1de740827eb"} Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.197914 4760 scope.go:117] "RemoveContainer" containerID="d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.198048 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-64dc9dbb9b-7dhpt" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.207436 4760 generic.go:334] "Generic (PLEG): container finished" podID="62f5081a-73e1-49f7-ac0a-d42c5271b6ba" containerID="5fd0c5be99f9ee7b58b378465e5bd85b87036fb49b2c56a5e008b1ccb68c0533" exitCode=0 Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.207604 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775457b975-8dft4" event={"ID":"62f5081a-73e1-49f7-ac0a-d42c5271b6ba","Type":"ContainerDied","Data":"5fd0c5be99f9ee7b58b378465e5bd85b87036fb49b2c56a5e008b1ccb68c0533"} Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.207636 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775457b975-8dft4" event={"ID":"62f5081a-73e1-49f7-ac0a-d42c5271b6ba","Type":"ContainerStarted","Data":"d6b502d9f0f9b9629e55c631ce9b1cc5fc3f7e89a4bbdadd5d77cd9cff0f6989"} Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.207794 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data" (OuterVolumeSpecName: "config-data") pod "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" (UID: "ee81734e-089c-4bc2-9e9a-44c08ae4cb3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.213893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerStarted","Data":"96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16"} Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.220069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f476dd7-3de1-423a-a4d0-dd2639c40bf8","Type":"ContainerStarted","Data":"78f67d3590ae2ec00d6100e1c091729897b88a002e73eb9899545ee717c0e02e"} Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.221696 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67b9bd30-6c4c-490c-a378-54c0ad55c528","Type":"ContainerStarted","Data":"b221b649adf95e3498a2bdbfcd0a465ea0daae79fcfb9cab6a41db21c4a2eb10"} Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.242486 4760 scope.go:117] "RemoveContainer" containerID="47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.274592 4760 scope.go:117] "RemoveContainer" containerID="d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.274933 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.274955 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:28:57 crc kubenswrapper[4760]: E1125 08:28:57.280854 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc\": container with ID starting with d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc not found: ID does not exist" containerID="d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.280905 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc"} err="failed to get container status \"d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc\": rpc error: code = NotFound desc = could not find container \"d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc\": container with ID starting with d6dad289440e5cf3f8045ce9853387a5198460f0481ab413deeda5e5cc0acccc not found: ID does not exist" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.280936 4760 scope.go:117] "RemoveContainer" containerID="47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f" Nov 25 08:28:57 crc kubenswrapper[4760]: E1125 08:28:57.281690 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f\": container with ID starting with 47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f not found: ID does not exist" containerID="47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.281725 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f"} err="failed to get container status \"47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f\": rpc error: code = NotFound desc = could not find container 
\"47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f\": container with ID starting with 47657a5c4ceddf73d23d5d94462eada11f94d1a9462818c126e074911533d84f not found: ID does not exist" Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.553084 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64dc9dbb9b-7dhpt"] Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.565828 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-64dc9dbb9b-7dhpt"] Nov 25 08:28:57 crc kubenswrapper[4760]: I1125 08:28:57.997555 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 08:28:58 crc kubenswrapper[4760]: I1125 08:28:58.240518 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerStarted","Data":"a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687"} Nov 25 08:28:58 crc kubenswrapper[4760]: I1125 08:28:58.243875 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f476dd7-3de1-423a-a4d0-dd2639c40bf8","Type":"ContainerStarted","Data":"7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d"} Nov 25 08:28:58 crc kubenswrapper[4760]: I1125 08:28:58.249227 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775457b975-8dft4" event={"ID":"62f5081a-73e1-49f7-ac0a-d42c5271b6ba","Type":"ContainerStarted","Data":"98cd0eb2943555555085d3ee8dd81577d5bbfc87745d36e244837ce8b55fbb67"} Nov 25 08:28:58 crc kubenswrapper[4760]: I1125 08:28:58.249849 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:28:58 crc kubenswrapper[4760]: I1125 08:28:58.279170 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-775457b975-8dft4" podStartSLOduration=3.27914937 
podStartE2EDuration="3.27914937s" podCreationTimestamp="2025-11-25 08:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:58.271511458 +0000 UTC m=+1071.980542253" watchObservedRunningTime="2025-11-25 08:28:58.27914937 +0000 UTC m=+1071.988180165" Nov 25 08:28:58 crc kubenswrapper[4760]: I1125 08:28:58.953379 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" path="/var/lib/kubelet/pods/ee81734e-089c-4bc2-9e9a-44c08ae4cb3d/volumes" Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.267960 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67b9bd30-6c4c-490c-a378-54c0ad55c528","Type":"ContainerStarted","Data":"2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3"} Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.268365 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67b9bd30-6c4c-490c-a378-54c0ad55c528","Type":"ContainerStarted","Data":"4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330"} Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.271656 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerStarted","Data":"66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a"} Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.273248 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f476dd7-3de1-423a-a4d0-dd2639c40bf8","Type":"ContainerStarted","Data":"4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06"} Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.273498 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerName="cinder-api-log" containerID="cri-o://7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d" gracePeriod=30 Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.273658 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerName="cinder-api" containerID="cri-o://4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06" gracePeriod=30 Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.290582 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.260594845 podStartE2EDuration="4.290563294s" podCreationTimestamp="2025-11-25 08:28:55 +0000 UTC" firstStartedPulling="2025-11-25 08:28:56.613035023 +0000 UTC m=+1070.322065828" lastFinishedPulling="2025-11-25 08:28:57.643003482 +0000 UTC m=+1071.352034277" observedRunningTime="2025-11-25 08:28:59.289142215 +0000 UTC m=+1072.998173010" watchObservedRunningTime="2025-11-25 08:28:59.290563294 +0000 UTC m=+1072.999594089" Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.323859 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.323843258 podStartE2EDuration="4.323843258s" podCreationTimestamp="2025-11-25 08:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:28:59.317591434 +0000 UTC m=+1073.026622229" watchObservedRunningTime="2025-11-25 08:28:59.323843258 +0000 UTC m=+1073.032874053" Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.881810 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.933372 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-logs\") pod \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.933487 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdpgr\" (UniqueName: \"kubernetes.io/projected/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-kube-api-access-fdpgr\") pod \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.933525 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-combined-ca-bundle\") pod \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.933595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-scripts\") pod \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.933665 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-etc-machine-id\") pod \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.933716 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data-custom\") pod \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.933747 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data\") pod \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\" (UID: \"8f476dd7-3de1-423a-a4d0-dd2639c40bf8\") " Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.937092 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8f476dd7-3de1-423a-a4d0-dd2639c40bf8" (UID: "8f476dd7-3de1-423a-a4d0-dd2639c40bf8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.937750 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-logs" (OuterVolumeSpecName: "logs") pod "8f476dd7-3de1-423a-a4d0-dd2639c40bf8" (UID: "8f476dd7-3de1-423a-a4d0-dd2639c40bf8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.942048 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f476dd7-3de1-423a-a4d0-dd2639c40bf8" (UID: "8f476dd7-3de1-423a-a4d0-dd2639c40bf8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.943072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-kube-api-access-fdpgr" (OuterVolumeSpecName: "kube-api-access-fdpgr") pod "8f476dd7-3de1-423a-a4d0-dd2639c40bf8" (UID: "8f476dd7-3de1-423a-a4d0-dd2639c40bf8"). InnerVolumeSpecName "kube-api-access-fdpgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.979820 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f476dd7-3de1-423a-a4d0-dd2639c40bf8" (UID: "8f476dd7-3de1-423a-a4d0-dd2639c40bf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:28:59 crc kubenswrapper[4760]: I1125 08:28:59.979852 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-scripts" (OuterVolumeSpecName: "scripts") pod "8f476dd7-3de1-423a-a4d0-dd2639c40bf8" (UID: "8f476dd7-3de1-423a-a4d0-dd2639c40bf8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.009801 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data" (OuterVolumeSpecName: "config-data") pod "8f476dd7-3de1-423a-a4d0-dd2639c40bf8" (UID: "8f476dd7-3de1-423a-a4d0-dd2639c40bf8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.038513 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.038564 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdpgr\" (UniqueName: \"kubernetes.io/projected/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-kube-api-access-fdpgr\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.038582 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.038595 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.038606 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.038616 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.038626 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f476dd7-3de1-423a-a4d0-dd2639c40bf8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.287273 4760 generic.go:334] "Generic 
(PLEG): container finished" podID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerID="4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06" exitCode=0 Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.287313 4760 generic.go:334] "Generic (PLEG): container finished" podID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerID="7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d" exitCode=143 Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.287338 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.287343 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f476dd7-3de1-423a-a4d0-dd2639c40bf8","Type":"ContainerDied","Data":"4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06"} Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.287430 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f476dd7-3de1-423a-a4d0-dd2639c40bf8","Type":"ContainerDied","Data":"7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d"} Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.287447 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f476dd7-3de1-423a-a4d0-dd2639c40bf8","Type":"ContainerDied","Data":"78f67d3590ae2ec00d6100e1c091729897b88a002e73eb9899545ee717c0e02e"} Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.287490 4760 scope.go:117] "RemoveContainer" containerID="4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.323466 4760 scope.go:117] "RemoveContainer" containerID="7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.325478 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 
08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.341734 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.354999 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 08:29:00 crc kubenswrapper[4760]: E1125 08:29:00.355427 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.355439 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api" Nov 25 08:29:00 crc kubenswrapper[4760]: E1125 08:29:00.355451 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerName="cinder-api" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.355457 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerName="cinder-api" Nov 25 08:29:00 crc kubenswrapper[4760]: E1125 08:29:00.355520 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerName="cinder-api-log" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.355527 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerName="cinder-api-log" Nov 25 08:29:00 crc kubenswrapper[4760]: E1125 08:29:00.355542 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api-log" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.355549 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api-log" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.355746 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.355768 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee81734e-089c-4bc2-9e9a-44c08ae4cb3d" containerName="barbican-api-log" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.355778 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerName="cinder-api" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.355787 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" containerName="cinder-api-log" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.357739 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.361476 4760 scope.go:117] "RemoveContainer" containerID="4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06" Nov 25 08:29:00 crc kubenswrapper[4760]: E1125 08:29:00.362055 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06\": container with ID starting with 4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06 not found: ID does not exist" containerID="4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.362132 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06"} err="failed to get container status \"4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06\": rpc error: code = NotFound desc = could not find container \"4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06\": container with ID starting with 
4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06 not found: ID does not exist" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.362161 4760 scope.go:117] "RemoveContainer" containerID="7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.362235 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.365926 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.366105 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 08:29:00 crc kubenswrapper[4760]: E1125 08:29:00.366223 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d\": container with ID starting with 7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d not found: ID does not exist" containerID="7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.366243 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d"} err="failed to get container status \"7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d\": rpc error: code = NotFound desc = could not find container \"7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d\": container with ID starting with 7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d not found: ID does not exist" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.366279 4760 scope.go:117] "RemoveContainer" 
containerID="4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.366583 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.369891 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06"} err="failed to get container status \"4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06\": rpc error: code = NotFound desc = could not find container \"4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06\": container with ID starting with 4c9a6a17841e335327c9ca8e362a802b6b2dc1758d0369af395c0ff50facbd06 not found: ID does not exist" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.369925 4760 scope.go:117] "RemoveContainer" containerID="7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.370141 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d"} err="failed to get container status \"7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d\": rpc error: code = NotFound desc = could not find container \"7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d\": container with ID starting with 7ff45b775682ebe8f90570fdb0d0bd23597192a8ac0a37c2af80588f982cef9d not found: ID does not exist" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.447176 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc 
kubenswrapper[4760]: I1125 08:29:00.447299 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-config-data\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.447347 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qswd8\" (UniqueName: \"kubernetes.io/projected/c0a8e435-6d04-48d6-b723-252b8358b055-kube-api-access-qswd8\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.447371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0a8e435-6d04-48d6-b723-252b8358b055-logs\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.447389 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.447439 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.447464 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0a8e435-6d04-48d6-b723-252b8358b055-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.447482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.447505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-scripts\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.548647 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.549058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-config-data\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.549125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qswd8\" (UniqueName: \"kubernetes.io/projected/c0a8e435-6d04-48d6-b723-252b8358b055-kube-api-access-qswd8\") pod \"cinder-api-0\" (UID: 
\"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.549156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0a8e435-6d04-48d6-b723-252b8358b055-logs\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.549184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.549258 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.549323 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0a8e435-6d04-48d6-b723-252b8358b055-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.549349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.549374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-scripts\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.549467 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0a8e435-6d04-48d6-b723-252b8358b055-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.550219 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0a8e435-6d04-48d6-b723-252b8358b055-logs\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.553138 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.553578 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-config-data\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.553627 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.554048 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-scripts\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.554777 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-config-data-custom\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.554926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0a8e435-6d04-48d6-b723-252b8358b055-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.568858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qswd8\" (UniqueName: \"kubernetes.io/projected/c0a8e435-6d04-48d6-b723-252b8358b055-kube-api-access-qswd8\") pod \"cinder-api-0\" (UID: \"c0a8e435-6d04-48d6-b723-252b8358b055\") " pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.678839 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.773738 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 08:29:00 crc kubenswrapper[4760]: I1125 08:29:00.956437 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f476dd7-3de1-423a-a4d0-dd2639c40bf8" path="/var/lib/kubelet/pods/8f476dd7-3de1-423a-a4d0-dd2639c40bf8/volumes" Nov 25 08:29:01 crc kubenswrapper[4760]: I1125 08:29:01.198386 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 08:29:01 crc kubenswrapper[4760]: I1125 08:29:01.307630 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerStarted","Data":"37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426"} Nov 25 08:29:01 crc kubenswrapper[4760]: I1125 08:29:01.310355 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 08:29:01 crc kubenswrapper[4760]: I1125 08:29:01.311081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0a8e435-6d04-48d6-b723-252b8358b055","Type":"ContainerStarted","Data":"3064ad122dbebfee035439a21a0d2e285862a31da10a982cfab79cec541732f4"} Nov 25 08:29:01 crc kubenswrapper[4760]: I1125 08:29:01.344818 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.359739721 podStartE2EDuration="7.344792954s" podCreationTimestamp="2025-11-25 08:28:54 +0000 UTC" firstStartedPulling="2025-11-25 08:28:55.252118937 +0000 UTC m=+1068.961149732" lastFinishedPulling="2025-11-25 08:29:00.23717216 +0000 UTC m=+1073.946202965" observedRunningTime="2025-11-25 08:29:01.341851872 +0000 UTC m=+1075.050882677" watchObservedRunningTime="2025-11-25 08:29:01.344792954 +0000 UTC m=+1075.053823749" Nov 
25 08:29:02 crc kubenswrapper[4760]: I1125 08:29:02.102998 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:29:02 crc kubenswrapper[4760]: I1125 08:29:02.346050 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0a8e435-6d04-48d6-b723-252b8358b055","Type":"ContainerStarted","Data":"a0e243db45719d0b60dced41f84c92ab822328f13bb8060f9f893b3c93041bee"} Nov 25 08:29:03 crc kubenswrapper[4760]: I1125 08:29:03.355306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c0a8e435-6d04-48d6-b723-252b8358b055","Type":"ContainerStarted","Data":"372282c7b634afa6ea0d31a2ae0eda0138f95a9ab4e90eecd29b9d9a88b2ade2"} Nov 25 08:29:03 crc kubenswrapper[4760]: I1125 08:29:03.355609 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 08:29:03 crc kubenswrapper[4760]: I1125 08:29:03.381311 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.381288863 podStartE2EDuration="3.381288863s" podCreationTimestamp="2025-11-25 08:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:29:03.374038411 +0000 UTC m=+1077.083069206" watchObservedRunningTime="2025-11-25 08:29:03.381288863 +0000 UTC m=+1077.090319668" Nov 25 08:29:03 crc kubenswrapper[4760]: I1125 08:29:03.697297 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:29:03 crc kubenswrapper[4760]: I1125 08:29:03.826365 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:29:04 crc kubenswrapper[4760]: I1125 08:29:04.687651 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/neutron-564c475cd5-6wg66" Nov 25 08:29:04 crc kubenswrapper[4760]: I1125 08:29:04.817027 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7ff756f59b-f8nvt"] Nov 25 08:29:04 crc kubenswrapper[4760]: I1125 08:29:04.824512 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7ff756f59b-f8nvt" podUID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerName="neutron-api" containerID="cri-o://ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf" gracePeriod=30 Nov 25 08:29:04 crc kubenswrapper[4760]: I1125 08:29:04.824928 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7ff756f59b-f8nvt" podUID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerName="neutron-httpd" containerID="cri-o://6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961" gracePeriod=30 Nov 25 08:29:05 crc kubenswrapper[4760]: I1125 08:29:05.379823 4760 generic.go:334] "Generic (PLEG): container finished" podID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerID="6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961" exitCode=0 Nov 25 08:29:05 crc kubenswrapper[4760]: I1125 08:29:05.379895 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ff756f59b-f8nvt" event={"ID":"e6613503-bc56-448f-aa4a-ef1e4003bfb2","Type":"ContainerDied","Data":"6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961"} Nov 25 08:29:05 crc kubenswrapper[4760]: I1125 08:29:05.806422 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:29:05 crc kubenswrapper[4760]: I1125 08:29:05.823088 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6655684d54-8jfvz" Nov 25 08:29:05 crc kubenswrapper[4760]: I1125 08:29:05.878575 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-844b557b9c-qhcjl"] Nov 25 08:29:05 crc kubenswrapper[4760]: I1125 08:29:05.878997 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" podUID="8499ed65-d46c-4e61-b113-06350f33838c" containerName="dnsmasq-dns" containerID="cri-o://d8f23495c8b054fd6d85cdbf7fabc899422278942bc52c0d6cc7d1e6c30a9404" gracePeriod=10 Nov 25 08:29:05 crc kubenswrapper[4760]: I1125 08:29:05.898829 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:29:05 crc kubenswrapper[4760]: I1125 08:29:05.974636 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b7dd9bf58-zdxgq"] Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.140624 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.207881 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.393020 4760 generic.go:334] "Generic (PLEG): container finished" podID="8499ed65-d46c-4e61-b113-06350f33838c" containerID="d8f23495c8b054fd6d85cdbf7fabc899422278942bc52c0d6cc7d1e6c30a9404" exitCode=0 Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.393268 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b7dd9bf58-zdxgq" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon-log" containerID="cri-o://d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32" gracePeriod=30 Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.393521 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" event={"ID":"8499ed65-d46c-4e61-b113-06350f33838c","Type":"ContainerDied","Data":"d8f23495c8b054fd6d85cdbf7fabc899422278942bc52c0d6cc7d1e6c30a9404"} Nov 
25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.393674 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerName="cinder-scheduler" containerID="cri-o://4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330" gracePeriod=30 Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.393969 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b7dd9bf58-zdxgq" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon" containerID="cri-o://fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d" gracePeriod=30 Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.394042 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerName="probe" containerID="cri-o://2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3" gracePeriod=30 Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.474598 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.628138 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk5cv\" (UniqueName: \"kubernetes.io/projected/8499ed65-d46c-4e61-b113-06350f33838c-kube-api-access-nk5cv\") pod \"8499ed65-d46c-4e61-b113-06350f33838c\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.628238 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-nb\") pod \"8499ed65-d46c-4e61-b113-06350f33838c\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.628322 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-dns-svc\") pod \"8499ed65-d46c-4e61-b113-06350f33838c\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.628390 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-config\") pod \"8499ed65-d46c-4e61-b113-06350f33838c\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.628528 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-sb\") pod \"8499ed65-d46c-4e61-b113-06350f33838c\" (UID: \"8499ed65-d46c-4e61-b113-06350f33838c\") " Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.643506 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8499ed65-d46c-4e61-b113-06350f33838c-kube-api-access-nk5cv" (OuterVolumeSpecName: "kube-api-access-nk5cv") pod "8499ed65-d46c-4e61-b113-06350f33838c" (UID: "8499ed65-d46c-4e61-b113-06350f33838c"). InnerVolumeSpecName "kube-api-access-nk5cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.691425 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8499ed65-d46c-4e61-b113-06350f33838c" (UID: "8499ed65-d46c-4e61-b113-06350f33838c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.699639 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8499ed65-d46c-4e61-b113-06350f33838c" (UID: "8499ed65-d46c-4e61-b113-06350f33838c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.720411 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-config" (OuterVolumeSpecName: "config") pod "8499ed65-d46c-4e61-b113-06350f33838c" (UID: "8499ed65-d46c-4e61-b113-06350f33838c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.720601 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8499ed65-d46c-4e61-b113-06350f33838c" (UID: "8499ed65-d46c-4e61-b113-06350f33838c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.730287 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk5cv\" (UniqueName: \"kubernetes.io/projected/8499ed65-d46c-4e61-b113-06350f33838c-kube-api-access-nk5cv\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.730331 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.730347 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.730359 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:06 crc kubenswrapper[4760]: I1125 08:29:06.730370 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8499ed65-d46c-4e61-b113-06350f33838c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:07 crc kubenswrapper[4760]: I1125 08:29:07.402633 4760 generic.go:334] "Generic (PLEG): container finished" podID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerID="2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3" exitCode=0 Nov 25 08:29:07 crc kubenswrapper[4760]: I1125 08:29:07.402703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67b9bd30-6c4c-490c-a378-54c0ad55c528","Type":"ContainerDied","Data":"2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3"} Nov 25 08:29:07 crc kubenswrapper[4760]: I1125 
08:29:07.404853 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" event={"ID":"8499ed65-d46c-4e61-b113-06350f33838c","Type":"ContainerDied","Data":"0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377"} Nov 25 08:29:07 crc kubenswrapper[4760]: I1125 08:29:07.404892 4760 scope.go:117] "RemoveContainer" containerID="d8f23495c8b054fd6d85cdbf7fabc899422278942bc52c0d6cc7d1e6c30a9404" Nov 25 08:29:07 crc kubenswrapper[4760]: I1125 08:29:07.404917 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844b557b9c-qhcjl" Nov 25 08:29:07 crc kubenswrapper[4760]: I1125 08:29:07.445574 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844b557b9c-qhcjl"] Nov 25 08:29:07 crc kubenswrapper[4760]: I1125 08:29:07.449424 4760 scope.go:117] "RemoveContainer" containerID="825259b322bdf7c811e153002b4235f40936303b06472afe3162f71f7da5b6b9" Nov 25 08:29:07 crc kubenswrapper[4760]: I1125 08:29:07.452640 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-844b557b9c-qhcjl"] Nov 25 08:29:08 crc kubenswrapper[4760]: I1125 08:29:08.823476 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:29:08 crc kubenswrapper[4760]: I1125 08:29:08.841011 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-598d8454cd-s4vpx" Nov 25 08:29:08 crc kubenswrapper[4760]: I1125 08:29:08.948963 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8499ed65-d46c-4e61-b113-06350f33838c" path="/var/lib/kubelet/pods/8499ed65-d46c-4e61-b113-06350f33838c/volumes" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.004290 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.091950 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-scripts\") pod \"67b9bd30-6c4c-490c-a378-54c0ad55c528\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.092002 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data\") pod \"67b9bd30-6c4c-490c-a378-54c0ad55c528\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.092067 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data-custom\") pod \"67b9bd30-6c4c-490c-a378-54c0ad55c528\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.092736 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-combined-ca-bundle\") pod \"67b9bd30-6c4c-490c-a378-54c0ad55c528\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.092894 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtwx2\" (UniqueName: \"kubernetes.io/projected/67b9bd30-6c4c-490c-a378-54c0ad55c528-kube-api-access-gtwx2\") pod \"67b9bd30-6c4c-490c-a378-54c0ad55c528\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.092950 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/67b9bd30-6c4c-490c-a378-54c0ad55c528-etc-machine-id\") pod \"67b9bd30-6c4c-490c-a378-54c0ad55c528\" (UID: \"67b9bd30-6c4c-490c-a378-54c0ad55c528\") " Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.098570 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67b9bd30-6c4c-490c-a378-54c0ad55c528-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "67b9bd30-6c4c-490c-a378-54c0ad55c528" (UID: "67b9bd30-6c4c-490c-a378-54c0ad55c528"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.099110 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "67b9bd30-6c4c-490c-a378-54c0ad55c528" (UID: "67b9bd30-6c4c-490c-a378-54c0ad55c528"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.115099 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-scripts" (OuterVolumeSpecName: "scripts") pod "67b9bd30-6c4c-490c-a378-54c0ad55c528" (UID: "67b9bd30-6c4c-490c-a378-54c0ad55c528"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.115168 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b9bd30-6c4c-490c-a378-54c0ad55c528-kube-api-access-gtwx2" (OuterVolumeSpecName: "kube-api-access-gtwx2") pod "67b9bd30-6c4c-490c-a378-54c0ad55c528" (UID: "67b9bd30-6c4c-490c-a378-54c0ad55c528"). InnerVolumeSpecName "kube-api-access-gtwx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.155175 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67b9bd30-6c4c-490c-a378-54c0ad55c528" (UID: "67b9bd30-6c4c-490c-a378-54c0ad55c528"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.163051 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-69cbccbbcc-v8kx4" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.197991 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.198037 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.198052 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.198067 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtwx2\" (UniqueName: \"kubernetes.io/projected/67b9bd30-6c4c-490c-a378-54c0ad55c528-kube-api-access-gtwx2\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.198080 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67b9bd30-6c4c-490c-a378-54c0ad55c528-etc-machine-id\") on 
node \"crc\" DevicePath \"\"" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.212929 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data" (OuterVolumeSpecName: "config-data") pod "67b9bd30-6c4c-490c-a378-54c0ad55c528" (UID: "67b9bd30-6c4c-490c-a378-54c0ad55c528"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.300128 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67b9bd30-6c4c-490c-a378-54c0ad55c528-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.430410 4760 generic.go:334] "Generic (PLEG): container finished" podID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerID="4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330" exitCode=0 Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.430610 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67b9bd30-6c4c-490c-a378-54c0ad55c528","Type":"ContainerDied","Data":"4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330"} Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.430725 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.430877 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67b9bd30-6c4c-490c-a378-54c0ad55c528","Type":"ContainerDied","Data":"b221b649adf95e3498a2bdbfcd0a465ea0daae79fcfb9cab6a41db21c4a2eb10"} Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.430952 4760 scope.go:117] "RemoveContainer" containerID="2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.435360 4760 generic.go:334] "Generic (PLEG): container finished" podID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerID="fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d" exitCode=0 Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.435465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7dd9bf58-zdxgq" event={"ID":"fed86ba5-c330-411e-bab0-88e86ceb8980","Type":"ContainerDied","Data":"fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d"} Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.462533 4760 scope.go:117] "RemoveContainer" containerID="4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.467641 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.478108 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.511375 4760 scope.go:117] "RemoveContainer" containerID="2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3" Nov 25 08:29:10 crc kubenswrapper[4760]: E1125 08:29:10.512028 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3\": container with ID starting with 2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3 not found: ID does not exist" containerID="2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.512067 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3"} err="failed to get container status \"2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3\": rpc error: code = NotFound desc = could not find container \"2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3\": container with ID starting with 2a8cd21ce61ff0b05a7ce871519804fc4027c4d082d438a548d12ac73e3c3fe3 not found: ID does not exist" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.512096 4760 scope.go:117] "RemoveContainer" containerID="4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330" Nov 25 08:29:10 crc kubenswrapper[4760]: E1125 08:29:10.512829 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330\": container with ID starting with 4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330 not found: ID does not exist" containerID="4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.512857 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330"} err="failed to get container status \"4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330\": rpc error: code = NotFound desc = could not find container \"4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330\": container with ID 
starting with 4b30cdd25ec21ae6ede3695ab4029cf5083b8c2a76be0a4e3902160358696330 not found: ID does not exist" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.518231 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 08:29:10 crc kubenswrapper[4760]: E1125 08:29:10.519896 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8499ed65-d46c-4e61-b113-06350f33838c" containerName="dnsmasq-dns" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.519943 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8499ed65-d46c-4e61-b113-06350f33838c" containerName="dnsmasq-dns" Nov 25 08:29:10 crc kubenswrapper[4760]: E1125 08:29:10.519963 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8499ed65-d46c-4e61-b113-06350f33838c" containerName="init" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.520047 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8499ed65-d46c-4e61-b113-06350f33838c" containerName="init" Nov 25 08:29:10 crc kubenswrapper[4760]: E1125 08:29:10.520079 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerName="cinder-scheduler" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.520088 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerName="cinder-scheduler" Nov 25 08:29:10 crc kubenswrapper[4760]: E1125 08:29:10.520127 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerName="probe" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.520136 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerName="probe" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.524800 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerName="probe" Nov 25 
08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.524884 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b9bd30-6c4c-490c-a378-54c0ad55c528" containerName="cinder-scheduler" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.524902 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8499ed65-d46c-4e61-b113-06350f33838c" containerName="dnsmasq-dns" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.526955 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.531060 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.533545 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.613618 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.613830 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.613915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j62c\" (UniqueName: \"kubernetes.io/projected/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-kube-api-access-5j62c\") pod \"cinder-scheduler-0\" (UID: 
\"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.613986 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.614016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.614234 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.717087 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.717133 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j62c\" (UniqueName: \"kubernetes.io/projected/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-kube-api-access-5j62c\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 
08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.717164 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.717182 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.717246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.717393 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.717608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.722038 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.722447 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-config-data\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.722918 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.724766 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-scripts\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.735824 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j62c\" (UniqueName: \"kubernetes.io/projected/f4e64f72-cbdd-44dc-9c1f-21b88eae9288-kube-api-access-5j62c\") pod \"cinder-scheduler-0\" (UID: \"f4e64f72-cbdd-44dc-9c1f-21b88eae9288\") " pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.892001 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 08:29:10 crc kubenswrapper[4760]: I1125 08:29:10.965146 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b9bd30-6c4c-490c-a378-54c0ad55c528" path="/var/lib/kubelet/pods/67b9bd30-6c4c-490c-a378-54c0ad55c528/volumes" Nov 25 08:29:11 crc kubenswrapper[4760]: E1125 08:29:11.472500 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8499ed65_d46c_4e61_b113_06350f33838c.slice/crio-0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377\": RecentStats: unable to find data in memory cache]" Nov 25 08:29:11 crc kubenswrapper[4760]: I1125 08:29:11.489167 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 08:29:11 crc kubenswrapper[4760]: I1125 08:29:11.549515 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b7dd9bf58-zdxgq" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.290651 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.292525 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.295665 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.295922 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.297157 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-92tdz" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.328687 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.454946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-openstack-config\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.455317 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-openstack-config-secret\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.455471 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.455630 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg7tw\" (UniqueName: \"kubernetes.io/projected/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-kube-api-access-gg7tw\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.466484 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4e64f72-cbdd-44dc-9c1f-21b88eae9288","Type":"ContainerStarted","Data":"9e6feea424bf99561f7f70a554a96eb02bfb7c63975e8ba2451d25513143432d"} Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.466761 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4e64f72-cbdd-44dc-9c1f-21b88eae9288","Type":"ContainerStarted","Data":"db3e81a8c1a8e6013ac89191a6205f79cd348002dab914215e22eb8e116d22bc"} Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.557187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-openstack-config-secret\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.557672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.557741 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg7tw\" (UniqueName: \"kubernetes.io/projected/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-kube-api-access-gg7tw\") pod \"openstackclient\" (UID: 
\"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.557807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-openstack-config\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.558781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-openstack-config\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.562201 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-openstack-config-secret\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.568983 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.587015 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg7tw\" (UniqueName: \"kubernetes.io/projected/9df819bd-2ca5-4dd0-9409-e8d6e9a80b93-kube-api-access-gg7tw\") pod \"openstackclient\" (UID: \"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93\") " pod="openstack/openstackclient" Nov 25 08:29:12 crc kubenswrapper[4760]: I1125 08:29:12.635589 4760 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 08:29:13 crc kubenswrapper[4760]: I1125 08:29:13.340520 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 08:29:13 crc kubenswrapper[4760]: I1125 08:29:13.407213 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 08:29:13 crc kubenswrapper[4760]: I1125 08:29:13.484127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f4e64f72-cbdd-44dc-9c1f-21b88eae9288","Type":"ContainerStarted","Data":"30897fc2dd001b043e8e4771123d820042779adea3c0d334a1b00bf855fde690"} Nov 25 08:29:13 crc kubenswrapper[4760]: I1125 08:29:13.491183 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93","Type":"ContainerStarted","Data":"6c1edaf70252f973b0d4b818ff764cd701fdee71a2b190c356dcc4265deead7b"} Nov 25 08:29:13 crc kubenswrapper[4760]: I1125 08:29:13.523088 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.523072272 podStartE2EDuration="3.523072272s" podCreationTimestamp="2025-11-25 08:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:29:13.520912182 +0000 UTC m=+1087.229942977" watchObservedRunningTime="2025-11-25 08:29:13.523072272 +0000 UTC m=+1087.232103067" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.336304 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.501901 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbtt2\" (UniqueName: \"kubernetes.io/projected/e6613503-bc56-448f-aa4a-ef1e4003bfb2-kube-api-access-sbtt2\") pod \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.501995 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-config\") pod \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.502038 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-combined-ca-bundle\") pod \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.502084 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-httpd-config\") pod \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.502197 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-ovndb-tls-certs\") pod \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\" (UID: \"e6613503-bc56-448f-aa4a-ef1e4003bfb2\") " Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.510434 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e6613503-bc56-448f-aa4a-ef1e4003bfb2" (UID: "e6613503-bc56-448f-aa4a-ef1e4003bfb2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.512317 4760 generic.go:334] "Generic (PLEG): container finished" podID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerID="ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf" exitCode=0 Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.512397 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6613503-bc56-448f-aa4a-ef1e4003bfb2-kube-api-access-sbtt2" (OuterVolumeSpecName: "kube-api-access-sbtt2") pod "e6613503-bc56-448f-aa4a-ef1e4003bfb2" (UID: "e6613503-bc56-448f-aa4a-ef1e4003bfb2"). InnerVolumeSpecName "kube-api-access-sbtt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.512522 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ff756f59b-f8nvt" event={"ID":"e6613503-bc56-448f-aa4a-ef1e4003bfb2","Type":"ContainerDied","Data":"ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf"} Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.512590 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ff756f59b-f8nvt" event={"ID":"e6613503-bc56-448f-aa4a-ef1e4003bfb2","Type":"ContainerDied","Data":"6a1e81b6a71dc793fb7ef46d2991e548a60a0590ffcd6380f623f68e53d92369"} Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.512614 4760 scope.go:117] "RemoveContainer" containerID="6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.513534 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7ff756f59b-f8nvt" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.605539 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbtt2\" (UniqueName: \"kubernetes.io/projected/e6613503-bc56-448f-aa4a-ef1e4003bfb2-kube-api-access-sbtt2\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.605573 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.614451 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-config" (OuterVolumeSpecName: "config") pod "e6613503-bc56-448f-aa4a-ef1e4003bfb2" (UID: "e6613503-bc56-448f-aa4a-ef1e4003bfb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.635728 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6613503-bc56-448f-aa4a-ef1e4003bfb2" (UID: "e6613503-bc56-448f-aa4a-ef1e4003bfb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.669151 4760 scope.go:117] "RemoveContainer" containerID="ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.698382 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e6613503-bc56-448f-aa4a-ef1e4003bfb2" (UID: "e6613503-bc56-448f-aa4a-ef1e4003bfb2"). 
InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.707582 4760 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.707619 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.707629 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6613503-bc56-448f-aa4a-ef1e4003bfb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.719406 4760 scope.go:117] "RemoveContainer" containerID="6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961" Nov 25 08:29:14 crc kubenswrapper[4760]: E1125 08:29:14.720798 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961\": container with ID starting with 6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961 not found: ID does not exist" containerID="6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.720845 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961"} err="failed to get container status \"6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961\": rpc error: code = NotFound desc = could not find container \"6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961\": 
container with ID starting with 6cd875c81b893695248c483cf4512d4dd8035feb5af20193d98c703d324bf961 not found: ID does not exist" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.720876 4760 scope.go:117] "RemoveContainer" containerID="ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf" Nov 25 08:29:14 crc kubenswrapper[4760]: E1125 08:29:14.721233 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf\": container with ID starting with ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf not found: ID does not exist" containerID="ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.721275 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf"} err="failed to get container status \"ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf\": rpc error: code = NotFound desc = could not find container \"ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf\": container with ID starting with ca3c9b932f12f3b40f5e5391c4679114ea82cf315d61cd6680b4c7aef30e0daf not found: ID does not exist" Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.847135 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7ff756f59b-f8nvt"] Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.854875 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7ff756f59b-f8nvt"] Nov 25 08:29:14 crc kubenswrapper[4760]: I1125 08:29:14.951365 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" path="/var/lib/kubelet/pods/e6613503-bc56-448f-aa4a-ef1e4003bfb2/volumes" Nov 25 08:29:15 crc kubenswrapper[4760]: I1125 08:29:15.892496 
4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 08:29:21 crc kubenswrapper[4760]: I1125 08:29:21.129638 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 08:29:21 crc kubenswrapper[4760]: I1125 08:29:21.549452 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b7dd9bf58-zdxgq" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 08:29:21 crc kubenswrapper[4760]: E1125 08:29:21.755546 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8499ed65_d46c_4e61_b113_06350f33838c.slice/crio-0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377\": RecentStats: unable to find data in memory cache]" Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.373015 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.373827 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="ceilometer-central-agent" containerID="cri-o://96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16" gracePeriod=30 Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.374320 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="proxy-httpd" containerID="cri-o://37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426" gracePeriod=30 Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.374398 4760 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="sg-core" containerID="cri-o://66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a" gracePeriod=30 Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.374444 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="ceilometer-notification-agent" containerID="cri-o://a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687" gracePeriod=30 Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.407529 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.155:3000/\": EOF" Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.614792 4760 generic.go:334] "Generic (PLEG): container finished" podID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerID="37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426" exitCode=0 Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.614824 4760 generic.go:334] "Generic (PLEG): container finished" podID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerID="66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a" exitCode=2 Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.614881 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerDied","Data":"37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426"} Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.614920 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerDied","Data":"66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a"} Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.617002 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9df819bd-2ca5-4dd0-9409-e8d6e9a80b93","Type":"ContainerStarted","Data":"3c806e440e70c43dd502af97646e047cf316ce34e49bfb39583b62838b0bbaaa"} Nov 25 08:29:23 crc kubenswrapper[4760]: I1125 08:29:23.639525 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.384235953 podStartE2EDuration="11.639504747s" podCreationTimestamp="2025-11-25 08:29:12 +0000 UTC" firstStartedPulling="2025-11-25 08:29:13.349997818 +0000 UTC m=+1087.059028613" lastFinishedPulling="2025-11-25 08:29:22.605266612 +0000 UTC m=+1096.314297407" observedRunningTime="2025-11-25 08:29:23.630338983 +0000 UTC m=+1097.339369778" watchObservedRunningTime="2025-11-25 08:29:23.639504747 +0000 UTC m=+1097.348535542" Nov 25 08:29:24 crc kubenswrapper[4760]: I1125 08:29:24.628205 4760 generic.go:334] "Generic (PLEG): container finished" podID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerID="96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16" exitCode=0 Nov 25 08:29:24 crc kubenswrapper[4760]: I1125 08:29:24.628308 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerDied","Data":"96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16"} Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.037503 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.086758 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-combined-ca-bundle\") pod \"30ead1cc-7ac6-4208-ba63-d5e41160e015\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.086815 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-config-data\") pod \"30ead1cc-7ac6-4208-ba63-d5e41160e015\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.086888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-log-httpd\") pod \"30ead1cc-7ac6-4208-ba63-d5e41160e015\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.086932 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-run-httpd\") pod \"30ead1cc-7ac6-4208-ba63-d5e41160e015\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.087015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw8vx\" (UniqueName: \"kubernetes.io/projected/30ead1cc-7ac6-4208-ba63-d5e41160e015-kube-api-access-qw8vx\") pod \"30ead1cc-7ac6-4208-ba63-d5e41160e015\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.087045 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-scripts\") pod \"30ead1cc-7ac6-4208-ba63-d5e41160e015\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.087087 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-sg-core-conf-yaml\") pod \"30ead1cc-7ac6-4208-ba63-d5e41160e015\" (UID: \"30ead1cc-7ac6-4208-ba63-d5e41160e015\") " Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.087560 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30ead1cc-7ac6-4208-ba63-d5e41160e015" (UID: "30ead1cc-7ac6-4208-ba63-d5e41160e015"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.087666 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30ead1cc-7ac6-4208-ba63-d5e41160e015" (UID: "30ead1cc-7ac6-4208-ba63-d5e41160e015"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.087836 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.087862 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30ead1cc-7ac6-4208-ba63-d5e41160e015-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.115449 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ead1cc-7ac6-4208-ba63-d5e41160e015-kube-api-access-qw8vx" (OuterVolumeSpecName: "kube-api-access-qw8vx") pod "30ead1cc-7ac6-4208-ba63-d5e41160e015" (UID: "30ead1cc-7ac6-4208-ba63-d5e41160e015"). InnerVolumeSpecName "kube-api-access-qw8vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.123387 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-scripts" (OuterVolumeSpecName: "scripts") pod "30ead1cc-7ac6-4208-ba63-d5e41160e015" (UID: "30ead1cc-7ac6-4208-ba63-d5e41160e015"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.137703 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30ead1cc-7ac6-4208-ba63-d5e41160e015" (UID: "30ead1cc-7ac6-4208-ba63-d5e41160e015"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.180005 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30ead1cc-7ac6-4208-ba63-d5e41160e015" (UID: "30ead1cc-7ac6-4208-ba63-d5e41160e015"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.189372 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw8vx\" (UniqueName: \"kubernetes.io/projected/30ead1cc-7ac6-4208-ba63-d5e41160e015-kube-api-access-qw8vx\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.189398 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.189407 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.189420 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.215389 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-config-data" (OuterVolumeSpecName: "config-data") pod "30ead1cc-7ac6-4208-ba63-d5e41160e015" (UID: "30ead1cc-7ac6-4208-ba63-d5e41160e015"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.291139 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ead1cc-7ac6-4208-ba63-d5e41160e015-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.638487 4760 generic.go:334] "Generic (PLEG): container finished" podID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerID="a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687" exitCode=0 Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.638535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerDied","Data":"a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687"} Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.638566 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30ead1cc-7ac6-4208-ba63-d5e41160e015","Type":"ContainerDied","Data":"0a887ef43879097646e4d0faf174058c0ee151133d55aa5ccc00265dc2e19d86"} Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.638588 4760 scope.go:117] "RemoveContainer" containerID="37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.638741 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.671681 4760 scope.go:117] "RemoveContainer" containerID="66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.674152 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.689216 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.703971 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.704543 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="sg-core" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704571 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="sg-core" Nov 25 08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.704603 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerName="neutron-httpd" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704611 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerName="neutron-httpd" Nov 25 08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.704622 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerName="neutron-api" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704628 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerName="neutron-api" Nov 25 08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.704639 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="ceilometer-notification-agent" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704645 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="ceilometer-notification-agent" Nov 25 08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.704670 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="ceilometer-central-agent" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704677 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="ceilometer-central-agent" Nov 25 08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.704686 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="proxy-httpd" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704692 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="proxy-httpd" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704882 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="proxy-httpd" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704903 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="ceilometer-notification-agent" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704913 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerName="neutron-api" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704928 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="ceilometer-central-agent" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704938 4760 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e6613503-bc56-448f-aa4a-ef1e4003bfb2" containerName="neutron-httpd" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.704947 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="sg-core" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.706872 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.707715 4760 scope.go:117] "RemoveContainer" containerID="a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.709008 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.709340 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.726472 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.736749 4760 scope.go:117] "RemoveContainer" containerID="96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.756915 4760 scope.go:117] "RemoveContainer" containerID="37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426" Nov 25 08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.757519 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426\": container with ID starting with 37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426 not found: ID does not exist" containerID="37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426" Nov 25 08:29:25 crc 
kubenswrapper[4760]: I1125 08:29:25.757578 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426"} err="failed to get container status \"37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426\": rpc error: code = NotFound desc = could not find container \"37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426\": container with ID starting with 37315bcde1e69de8b9061a4c5dcbb3fa9109819f6df17d8c80c113b2c0a0c426 not found: ID does not exist" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.757611 4760 scope.go:117] "RemoveContainer" containerID="66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a" Nov 25 08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.758015 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a\": container with ID starting with 66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a not found: ID does not exist" containerID="66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.758042 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a"} err="failed to get container status \"66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a\": rpc error: code = NotFound desc = could not find container \"66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a\": container with ID starting with 66bd4fcf4cc5231ff950286d2f30718b07437ec84aa7466e6628bc624fb9df3a not found: ID does not exist" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.758059 4760 scope.go:117] "RemoveContainer" containerID="a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687" Nov 25 
08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.758339 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687\": container with ID starting with a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687 not found: ID does not exist" containerID="a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.758367 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687"} err="failed to get container status \"a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687\": rpc error: code = NotFound desc = could not find container \"a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687\": container with ID starting with a48ed76659d7d4454abe00317d28681902d85ce741075955dd7b9167555bc687 not found: ID does not exist" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.758381 4760 scope.go:117] "RemoveContainer" containerID="96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16" Nov 25 08:29:25 crc kubenswrapper[4760]: E1125 08:29:25.758759 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16\": container with ID starting with 96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16 not found: ID does not exist" containerID="96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.758845 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16"} err="failed to get container status 
\"96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16\": rpc error: code = NotFound desc = could not find container \"96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16\": container with ID starting with 96d6cb53db47def92b60a6a42434ebd4dbf17fe38df38d3dcfce4181ed81be16 not found: ID does not exist" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.801004 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bvkj\" (UniqueName: \"kubernetes.io/projected/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-kube-api-access-4bvkj\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.801076 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.801116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-log-httpd\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.801146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.801166 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-scripts\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.801190 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-config-data\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.801266 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-run-httpd\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.902871 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bvkj\" (UniqueName: \"kubernetes.io/projected/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-kube-api-access-4bvkj\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.902919 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.902949 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-log-httpd\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " 
pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.902967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.902982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-scripts\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.903000 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-config-data\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.903041 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-run-httpd\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.903603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-run-httpd\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.904486 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-log-httpd\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.908104 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-config-data\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.908119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.908529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.909432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-scripts\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:25 crc kubenswrapper[4760]: I1125 08:29:25.925621 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bvkj\" (UniqueName: \"kubernetes.io/projected/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-kube-api-access-4bvkj\") pod \"ceilometer-0\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " pod="openstack/ceilometer-0" Nov 25 08:29:26 crc kubenswrapper[4760]: I1125 08:29:26.028764 4760 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:26 crc kubenswrapper[4760]: I1125 08:29:26.470789 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:26 crc kubenswrapper[4760]: I1125 08:29:26.659767 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerStarted","Data":"f06904bbbf0ee636399184fc05decdc1eb11675ea2fccda81f79c7ec4c58f909"} Nov 25 08:29:26 crc kubenswrapper[4760]: I1125 08:29:26.954411 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" path="/var/lib/kubelet/pods/30ead1cc-7ac6-4208-ba63-d5e41160e015/volumes" Nov 25 08:29:27 crc kubenswrapper[4760]: I1125 08:29:27.370396 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:27 crc kubenswrapper[4760]: I1125 08:29:27.681458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerStarted","Data":"f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab"} Nov 25 08:29:29 crc kubenswrapper[4760]: I1125 08:29:29.700145 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerStarted","Data":"de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198"} Nov 25 08:29:30 crc kubenswrapper[4760]: I1125 08:29:30.711344 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerStarted","Data":"555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610"} Nov 25 08:29:31 crc kubenswrapper[4760]: I1125 08:29:31.550025 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b7dd9bf58-zdxgq" 
podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 08:29:31 crc kubenswrapper[4760]: I1125 08:29:31.550480 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:29:31 crc kubenswrapper[4760]: I1125 08:29:31.723062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerStarted","Data":"a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7"} Nov 25 08:29:31 crc kubenswrapper[4760]: I1125 08:29:31.723214 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="ceilometer-central-agent" containerID="cri-o://f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab" gracePeriod=30 Nov 25 08:29:31 crc kubenswrapper[4760]: I1125 08:29:31.723273 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="sg-core" containerID="cri-o://555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610" gracePeriod=30 Nov 25 08:29:31 crc kubenswrapper[4760]: I1125 08:29:31.723230 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 08:29:31 crc kubenswrapper[4760]: I1125 08:29:31.723295 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="ceilometer-notification-agent" containerID="cri-o://de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198" gracePeriod=30 Nov 25 08:29:31 crc kubenswrapper[4760]: I1125 08:29:31.723259 4760 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="proxy-httpd" containerID="cri-o://a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7" gracePeriod=30 Nov 25 08:29:31 crc kubenswrapper[4760]: I1125 08:29:31.746735 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.346376255 podStartE2EDuration="6.746715033s" podCreationTimestamp="2025-11-25 08:29:25 +0000 UTC" firstStartedPulling="2025-11-25 08:29:26.486188044 +0000 UTC m=+1100.195218839" lastFinishedPulling="2025-11-25 08:29:30.886526812 +0000 UTC m=+1104.595557617" observedRunningTime="2025-11-25 08:29:31.743061668 +0000 UTC m=+1105.452092473" watchObservedRunningTime="2025-11-25 08:29:31.746715033 +0000 UTC m=+1105.455745828" Nov 25 08:29:31 crc kubenswrapper[4760]: E1125 08:29:31.979890 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5160fd6c_8a30_472c_a5f3_67e4f8bf90f8.slice/crio-a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8499ed65_d46c_4e61_b113_06350f33838c.slice/crio-0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377\": RecentStats: unable to find data in memory cache]" Nov 25 08:29:32 crc kubenswrapper[4760]: I1125 08:29:32.733557 4760 generic.go:334] "Generic (PLEG): container finished" podID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerID="a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7" exitCode=0 Nov 25 08:29:32 crc kubenswrapper[4760]: I1125 08:29:32.733608 4760 generic.go:334] "Generic (PLEG): container finished" podID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" 
containerID="555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610" exitCode=2 Nov 25 08:29:32 crc kubenswrapper[4760]: I1125 08:29:32.733622 4760 generic.go:334] "Generic (PLEG): container finished" podID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerID="de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198" exitCode=0 Nov 25 08:29:32 crc kubenswrapper[4760]: I1125 08:29:32.733653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerDied","Data":"a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7"} Nov 25 08:29:32 crc kubenswrapper[4760]: I1125 08:29:32.733694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerDied","Data":"555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610"} Nov 25 08:29:32 crc kubenswrapper[4760]: I1125 08:29:32.733714 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerDied","Data":"de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198"} Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.622811 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.761692 4760 generic.go:334] "Generic (PLEG): container finished" podID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerID="f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab" exitCode=0 Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.761736 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerDied","Data":"f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab"} Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.761765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8","Type":"ContainerDied","Data":"f06904bbbf0ee636399184fc05decdc1eb11675ea2fccda81f79c7ec4c58f909"} Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.761768 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.761783 4760 scope.go:117] "RemoveContainer" containerID="a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.768209 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-scripts\") pod \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.768422 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-log-httpd\") pod \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.768504 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bvkj\" (UniqueName: \"kubernetes.io/projected/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-kube-api-access-4bvkj\") pod \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.768913 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" (UID: "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.769655 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-sg-core-conf-yaml\") pod \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.770129 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-combined-ca-bundle\") pod \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.770179 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-config-data\") pod \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.770273 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-run-httpd\") pod \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\" (UID: \"5160fd6c-8a30-472c-a5f3-67e4f8bf90f8\") " Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.770750 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" (UID: "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.770868 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.770892 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.774180 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-scripts" (OuterVolumeSpecName: "scripts") pod "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" (UID: "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.774948 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-kube-api-access-4bvkj" (OuterVolumeSpecName: "kube-api-access-4bvkj") pod "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" (UID: "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8"). InnerVolumeSpecName "kube-api-access-4bvkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.784296 4760 scope.go:117] "RemoveContainer" containerID="555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.797173 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" (UID: "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.838994 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" (UID: "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.866466 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-config-data" (OuterVolumeSpecName: "config-data") pod "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" (UID: "5160fd6c-8a30-472c-a5f3-67e4f8bf90f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.872843 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bvkj\" (UniqueName: \"kubernetes.io/projected/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-kube-api-access-4bvkj\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.872868 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.872881 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.872890 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.872898 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.894471 4760 scope.go:117] "RemoveContainer" containerID="de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.916030 4760 scope.go:117] "RemoveContainer" containerID="f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.934790 4760 scope.go:117] "RemoveContainer" containerID="a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7" Nov 25 08:29:34 crc kubenswrapper[4760]: E1125 08:29:34.935368 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7\": container with ID starting with a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7 not found: ID does not exist" containerID="a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.935437 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7"} err="failed to get container status \"a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7\": rpc error: code = NotFound desc = could not find container \"a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7\": container with ID starting with a9948d49c9a4f73dbac4bcec9bc394e2102c7980bfc6d8b518952749205cd5f7 not found: ID does not exist" Nov 25 08:29:34 crc kubenswrapper[4760]: 
I1125 08:29:34.935481 4760 scope.go:117] "RemoveContainer" containerID="555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610" Nov 25 08:29:34 crc kubenswrapper[4760]: E1125 08:29:34.936017 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610\": container with ID starting with 555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610 not found: ID does not exist" containerID="555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.936058 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610"} err="failed to get container status \"555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610\": rpc error: code = NotFound desc = could not find container \"555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610\": container with ID starting with 555b9201a6f3fc10800bb82ab4360956cf075f3e1b661df9b9f8dcf164073610 not found: ID does not exist" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.936086 4760 scope.go:117] "RemoveContainer" containerID="de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198" Nov 25 08:29:34 crc kubenswrapper[4760]: E1125 08:29:34.936473 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198\": container with ID starting with de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198 not found: ID does not exist" containerID="de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.936502 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198"} err="failed to get container status \"de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198\": rpc error: code = NotFound desc = could not find container \"de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198\": container with ID starting with de7b68f70fefb0ad431e8bb3dcbd99fc2d5ab7b6f0def8256561e66c1fe53198 not found: ID does not exist" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.936518 4760 scope.go:117] "RemoveContainer" containerID="f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab" Nov 25 08:29:34 crc kubenswrapper[4760]: E1125 08:29:34.936763 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab\": container with ID starting with f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab not found: ID does not exist" containerID="f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.936793 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab"} err="failed to get container status \"f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab\": rpc error: code = NotFound desc = could not find container \"f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab\": container with ID starting with f770bb8e1ff8a34238a0c8801faa9a5c2ed56abae2468c063f7f0f622a0cfdab not found: ID does not exist" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.998960 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2zt27"] Nov 25 08:29:34 crc kubenswrapper[4760]: E1125 08:29:34.999310 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="ceilometer-notification-agent" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.999325 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="ceilometer-notification-agent" Nov 25 08:29:34 crc kubenswrapper[4760]: E1125 08:29:34.999347 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="proxy-httpd" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.999354 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="proxy-httpd" Nov 25 08:29:34 crc kubenswrapper[4760]: E1125 08:29:34.999366 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="sg-core" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.999372 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="sg-core" Nov 25 08:29:34 crc kubenswrapper[4760]: E1125 08:29:34.999394 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="ceilometer-central-agent" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.999400 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="ceilometer-central-agent" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.999547 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="proxy-httpd" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.999559 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="ceilometer-central-agent" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.999581 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="ceilometer-notification-agent" Nov 25 08:29:34 crc kubenswrapper[4760]: I1125 08:29:34.999596 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" containerName="sg-core" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.000073 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.007612 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2zt27"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.087986 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vnzj\" (UniqueName: \"kubernetes.io/projected/0993c794-4a24-476a-b473-ea84948835cd-kube-api-access-4vnzj\") pod \"nova-api-db-create-2zt27\" (UID: \"0993c794-4a24-476a-b473-ea84948835cd\") " pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.088453 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0993c794-4a24-476a-b473-ea84948835cd-operator-scripts\") pod \"nova-api-db-create-2zt27\" (UID: \"0993c794-4a24-476a-b473-ea84948835cd\") " pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.107137 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.113997 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.123304 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-mkm9v"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.124750 
4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.140293 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mkm9v"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.151317 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.153967 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.157568 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.157606 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.169727 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0993c794-4a24-476a-b473-ea84948835cd-operator-scripts\") pod \"nova-api-db-create-2zt27\" (UID: \"0993c794-4a24-476a-b473-ea84948835cd\") " pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192695 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24ba00a9-0675-4154-8db7-a3dec9528ce1-operator-scripts\") pod \"nova-cell0-db-create-mkm9v\" (UID: \"24ba00a9-0675-4154-8db7-a3dec9528ce1\") " pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192733 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-run-httpd\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vnzj\" (UniqueName: \"kubernetes.io/projected/0993c794-4a24-476a-b473-ea84948835cd-kube-api-access-4vnzj\") pod \"nova-api-db-create-2zt27\" (UID: \"0993c794-4a24-476a-b473-ea84948835cd\") " pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192827 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-log-httpd\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192855 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf85p\" (UniqueName: \"kubernetes.io/projected/24ba00a9-0675-4154-8db7-a3dec9528ce1-kube-api-access-bf85p\") pod \"nova-cell0-db-create-mkm9v\" (UID: \"24ba00a9-0675-4154-8db7-a3dec9528ce1\") " pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192877 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-config-data\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192900 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77kqm\" (UniqueName: \"kubernetes.io/projected/33f356b2-7c4e-4ce6-86d5-a6771ef86271-kube-api-access-77kqm\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192948 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-scripts\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.192965 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.193940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0993c794-4a24-476a-b473-ea84948835cd-operator-scripts\") pod \"nova-api-db-create-2zt27\" (UID: \"0993c794-4a24-476a-b473-ea84948835cd\") " pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.202550 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-388b-account-create-h5j6g"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.203706 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.206362 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.224809 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-388b-account-create-h5j6g"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.229180 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vnzj\" (UniqueName: \"kubernetes.io/projected/0993c794-4a24-476a-b473-ea84948835cd-kube-api-access-4vnzj\") pod \"nova-api-db-create-2zt27\" (UID: \"0993c794-4a24-476a-b473-ea84948835cd\") " pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.295827 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77kqm\" (UniqueName: \"kubernetes.io/projected/33f356b2-7c4e-4ce6-86d5-a6771ef86271-kube-api-access-77kqm\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.295908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637b4ab8-7e6b-4068-993c-5dc8f5975b93-operator-scripts\") pod \"nova-api-388b-account-create-h5j6g\" (UID: \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\") " pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.295973 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-scripts\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.296001 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.296043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24ba00a9-0675-4154-8db7-a3dec9528ce1-operator-scripts\") pod \"nova-cell0-db-create-mkm9v\" (UID: \"24ba00a9-0675-4154-8db7-a3dec9528ce1\") " pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.296094 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.296115 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-run-httpd\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.296184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-log-httpd\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.296218 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrqmv\" (UniqueName: 
\"kubernetes.io/projected/637b4ab8-7e6b-4068-993c-5dc8f5975b93-kube-api-access-vrqmv\") pod \"nova-api-388b-account-create-h5j6g\" (UID: \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\") " pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.296285 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf85p\" (UniqueName: \"kubernetes.io/projected/24ba00a9-0675-4154-8db7-a3dec9528ce1-kube-api-access-bf85p\") pod \"nova-cell0-db-create-mkm9v\" (UID: \"24ba00a9-0675-4154-8db7-a3dec9528ce1\") " pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.296315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-config-data\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.296998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24ba00a9-0675-4154-8db7-a3dec9528ce1-operator-scripts\") pod \"nova-cell0-db-create-mkm9v\" (UID: \"24ba00a9-0675-4154-8db7-a3dec9528ce1\") " pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.297105 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-log-httpd\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.297918 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-run-httpd\") pod \"ceilometer-0\" (UID: 
\"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.303893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-scripts\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.303889 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-skw7b"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.305197 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.306953 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.307933 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-config-data\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.321753 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.322158 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.326084 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77kqm\" (UniqueName: \"kubernetes.io/projected/33f356b2-7c4e-4ce6-86d5-a6771ef86271-kube-api-access-77kqm\") pod \"ceilometer-0\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.327785 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf85p\" (UniqueName: \"kubernetes.io/projected/24ba00a9-0675-4154-8db7-a3dec9528ce1-kube-api-access-bf85p\") pod \"nova-cell0-db-create-mkm9v\" (UID: \"24ba00a9-0675-4154-8db7-a3dec9528ce1\") " pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.332387 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-skw7b"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.397937 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637b4ab8-7e6b-4068-993c-5dc8f5975b93-operator-scripts\") pod \"nova-api-388b-account-create-h5j6g\" (UID: \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\") " pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.398078 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5txl\" (UniqueName: \"kubernetes.io/projected/4ff4c392-598d-40ec-8803-d97ca2429c37-kube-api-access-n5txl\") pod \"nova-cell1-db-create-skw7b\" (UID: \"4ff4c392-598d-40ec-8803-d97ca2429c37\") " pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.398127 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/4ff4c392-598d-40ec-8803-d97ca2429c37-operator-scripts\") pod \"nova-cell1-db-create-skw7b\" (UID: \"4ff4c392-598d-40ec-8803-d97ca2429c37\") " pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.398187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrqmv\" (UniqueName: \"kubernetes.io/projected/637b4ab8-7e6b-4068-993c-5dc8f5975b93-kube-api-access-vrqmv\") pod \"nova-api-388b-account-create-h5j6g\" (UID: \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\") " pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.399410 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637b4ab8-7e6b-4068-993c-5dc8f5975b93-operator-scripts\") pod \"nova-api-388b-account-create-h5j6g\" (UID: \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\") " pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.415066 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d8c3-account-create-9hlqt"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.419376 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.421715 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.423088 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrqmv\" (UniqueName: \"kubernetes.io/projected/637b4ab8-7e6b-4068-993c-5dc8f5975b93-kube-api-access-vrqmv\") pod \"nova-api-388b-account-create-h5j6g\" (UID: \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\") " pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.427533 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d8c3-account-create-9hlqt"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.452296 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.478872 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.499996 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8065060f-1c06-4186-8a41-e864d9256d7b-operator-scripts\") pod \"nova-cell0-d8c3-account-create-9hlqt\" (UID: \"8065060f-1c06-4186-8a41-e864d9256d7b\") " pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.500134 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5txl\" (UniqueName: \"kubernetes.io/projected/4ff4c392-598d-40ec-8803-d97ca2429c37-kube-api-access-n5txl\") pod \"nova-cell1-db-create-skw7b\" (UID: \"4ff4c392-598d-40ec-8803-d97ca2429c37\") " pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.500185 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ff4c392-598d-40ec-8803-d97ca2429c37-operator-scripts\") pod \"nova-cell1-db-create-skw7b\" (UID: \"4ff4c392-598d-40ec-8803-d97ca2429c37\") " pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.500287 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxjz\" (UniqueName: \"kubernetes.io/projected/8065060f-1c06-4186-8a41-e864d9256d7b-kube-api-access-hfxjz\") pod \"nova-cell0-d8c3-account-create-9hlqt\" (UID: \"8065060f-1c06-4186-8a41-e864d9256d7b\") " pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.502604 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ff4c392-598d-40ec-8803-d97ca2429c37-operator-scripts\") pod \"nova-cell1-db-create-skw7b\" 
(UID: \"4ff4c392-598d-40ec-8803-d97ca2429c37\") " pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.517977 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.519808 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5txl\" (UniqueName: \"kubernetes.io/projected/4ff4c392-598d-40ec-8803-d97ca2429c37-kube-api-access-n5txl\") pod \"nova-cell1-db-create-skw7b\" (UID: \"4ff4c392-598d-40ec-8803-d97ca2429c37\") " pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.530444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.605412 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfxjz\" (UniqueName: \"kubernetes.io/projected/8065060f-1c06-4186-8a41-e864d9256d7b-kube-api-access-hfxjz\") pod \"nova-cell0-d8c3-account-create-9hlqt\" (UID: \"8065060f-1c06-4186-8a41-e864d9256d7b\") " pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.605468 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8065060f-1c06-4186-8a41-e864d9256d7b-operator-scripts\") pod \"nova-cell0-d8c3-account-create-9hlqt\" (UID: \"8065060f-1c06-4186-8a41-e864d9256d7b\") " pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.606139 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8065060f-1c06-4186-8a41-e864d9256d7b-operator-scripts\") pod \"nova-cell0-d8c3-account-create-9hlqt\" (UID: 
\"8065060f-1c06-4186-8a41-e864d9256d7b\") " pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.613979 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-bb66-account-create-r9w4q"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.615465 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.618118 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.628888 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bb66-account-create-r9w4q"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.634315 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfxjz\" (UniqueName: \"kubernetes.io/projected/8065060f-1c06-4186-8a41-e864d9256d7b-kube-api-access-hfxjz\") pod \"nova-cell0-d8c3-account-create-9hlqt\" (UID: \"8065060f-1c06-4186-8a41-e864d9256d7b\") " pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.707852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hbm8\" (UniqueName: \"kubernetes.io/projected/fee850d3-ea88-45ef-9a47-56cfe91d2c36-kube-api-access-8hbm8\") pod \"nova-cell1-bb66-account-create-r9w4q\" (UID: \"fee850d3-ea88-45ef-9a47-56cfe91d2c36\") " pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.707910 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee850d3-ea88-45ef-9a47-56cfe91d2c36-operator-scripts\") pod \"nova-cell1-bb66-account-create-r9w4q\" (UID: 
\"fee850d3-ea88-45ef-9a47-56cfe91d2c36\") " pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.810007 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hbm8\" (UniqueName: \"kubernetes.io/projected/fee850d3-ea88-45ef-9a47-56cfe91d2c36-kube-api-access-8hbm8\") pod \"nova-cell1-bb66-account-create-r9w4q\" (UID: \"fee850d3-ea88-45ef-9a47-56cfe91d2c36\") " pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.810084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee850d3-ea88-45ef-9a47-56cfe91d2c36-operator-scripts\") pod \"nova-cell1-bb66-account-create-r9w4q\" (UID: \"fee850d3-ea88-45ef-9a47-56cfe91d2c36\") " pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.813380 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee850d3-ea88-45ef-9a47-56cfe91d2c36-operator-scripts\") pod \"nova-cell1-bb66-account-create-r9w4q\" (UID: \"fee850d3-ea88-45ef-9a47-56cfe91d2c36\") " pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.830813 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hbm8\" (UniqueName: \"kubernetes.io/projected/fee850d3-ea88-45ef-9a47-56cfe91d2c36-kube-api-access-8hbm8\") pod \"nova-cell1-bb66-account-create-r9w4q\" (UID: \"fee850d3-ea88-45ef-9a47-56cfe91d2c36\") " pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.845195 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.909462 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2zt27"] Nov 25 08:29:35 crc kubenswrapper[4760]: I1125 08:29:35.953622 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.119050 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:36 crc kubenswrapper[4760]: W1125 08:29:36.130963 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod637b4ab8_7e6b_4068_993c_5dc8f5975b93.slice/crio-449872ea43e20080f4684fdc77b27b0c6ae52419a4acfc6b9c93e1cbbc766432 WatchSource:0}: Error finding container 449872ea43e20080f4684fdc77b27b0c6ae52419a4acfc6b9c93e1cbbc766432: Status 404 returned error can't find the container with id 449872ea43e20080f4684fdc77b27b0c6ae52419a4acfc6b9c93e1cbbc766432 Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.131751 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mkm9v"] Nov 25 08:29:36 crc kubenswrapper[4760]: W1125 08:29:36.136057 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33f356b2_7c4e_4ce6_86d5_a6771ef86271.slice/crio-c04791adba158872dde7161f84204b784524069f11893ed0eb5234fe7d47cbff WatchSource:0}: Error finding container c04791adba158872dde7161f84204b784524069f11893ed0eb5234fe7d47cbff: Status 404 returned error can't find the container with id c04791adba158872dde7161f84204b784524069f11893ed0eb5234fe7d47cbff Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.140144 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-388b-account-create-h5j6g"] Nov 25 
08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.270926 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-skw7b"] Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.378739 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d8c3-account-create-9hlqt"] Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.500670 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bb66-account-create-r9w4q"] Nov 25 08:29:36 crc kubenswrapper[4760]: W1125 08:29:36.511127 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfee850d3_ea88_45ef_9a47_56cfe91d2c36.slice/crio-136445e35825d0d08509d090d1ff15eb2c52bb6204186408cee9176f5182e8fa WatchSource:0}: Error finding container 136445e35825d0d08509d090d1ff15eb2c52bb6204186408cee9176f5182e8fa: Status 404 returned error can't find the container with id 136445e35825d0d08509d090d1ff15eb2c52bb6204186408cee9176f5182e8fa Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.809313 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.812176 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d8c3-account-create-9hlqt" event={"ID":"8065060f-1c06-4186-8a41-e864d9256d7b","Type":"ContainerStarted","Data":"508a836bd639d421425fd20152149e632b1ac0e5e967315a67a8701e6dad9328"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.840210 4760 generic.go:334] "Generic (PLEG): container finished" podID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerID="d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32" exitCode=137 Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.840326 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b7dd9bf58-zdxgq" Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.840415 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7dd9bf58-zdxgq" event={"ID":"fed86ba5-c330-411e-bab0-88e86ceb8980","Type":"ContainerDied","Data":"d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.840445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7dd9bf58-zdxgq" event={"ID":"fed86ba5-c330-411e-bab0-88e86ceb8980","Type":"ContainerDied","Data":"972880981c9a6c24e0cd0bc733a9a2b6616e2443144d6f2acdd0559e2010370c"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.840489 4760 scope.go:117] "RemoveContainer" containerID="fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d" Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.854848 4760 generic.go:334] "Generic (PLEG): container finished" podID="637b4ab8-7e6b-4068-993c-5dc8f5975b93" containerID="bc4bdd4adfc52a3a2f44d4963b4aa3c3062ed598f9f8cc44350bafca0ccdfe2a" exitCode=0 Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.854952 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-388b-account-create-h5j6g" event={"ID":"637b4ab8-7e6b-4068-993c-5dc8f5975b93","Type":"ContainerDied","Data":"bc4bdd4adfc52a3a2f44d4963b4aa3c3062ed598f9f8cc44350bafca0ccdfe2a"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.854985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-388b-account-create-h5j6g" event={"ID":"637b4ab8-7e6b-4068-993c-5dc8f5975b93","Type":"ContainerStarted","Data":"449872ea43e20080f4684fdc77b27b0c6ae52419a4acfc6b9c93e1cbbc766432"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.861310 4760 generic.go:334] "Generic (PLEG): container finished" podID="0993c794-4a24-476a-b473-ea84948835cd" 
containerID="b7a405f44808bc3841f17df9cd22edc34afc8f2c2797e3cde506423b5dd0b306" exitCode=0 Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.861446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2zt27" event={"ID":"0993c794-4a24-476a-b473-ea84948835cd","Type":"ContainerDied","Data":"b7a405f44808bc3841f17df9cd22edc34afc8f2c2797e3cde506423b5dd0b306"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.861480 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2zt27" event={"ID":"0993c794-4a24-476a-b473-ea84948835cd","Type":"ContainerStarted","Data":"e09d363001a9006fc9227016164d9f68073fe53e9322b61c16730c2b9c6191be"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.863120 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerStarted","Data":"c04791adba158872dde7161f84204b784524069f11893ed0eb5234fe7d47cbff"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.873363 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bb66-account-create-r9w4q" event={"ID":"fee850d3-ea88-45ef-9a47-56cfe91d2c36","Type":"ContainerStarted","Data":"136445e35825d0d08509d090d1ff15eb2c52bb6204186408cee9176f5182e8fa"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.875426 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-skw7b" event={"ID":"4ff4c392-598d-40ec-8803-d97ca2429c37","Type":"ContainerStarted","Data":"93326a2e47b83f14d5a2a50bb4599d8ae0e5b693dd0fc51ed599135f9eafdab2"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.885165 4760 generic.go:334] "Generic (PLEG): container finished" podID="24ba00a9-0675-4154-8db7-a3dec9528ce1" containerID="3e6b0169a360a96744b553fc190d26554e8e2264d7eb67351fe738196ade51bb" exitCode=0 Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.885210 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-db-create-mkm9v" event={"ID":"24ba00a9-0675-4154-8db7-a3dec9528ce1","Type":"ContainerDied","Data":"3e6b0169a360a96744b553fc190d26554e8e2264d7eb67351fe738196ade51bb"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.885235 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mkm9v" event={"ID":"24ba00a9-0675-4154-8db7-a3dec9528ce1","Type":"ContainerStarted","Data":"8d8a7d46489f88782d0be66a62945d3ddcc3c3d8cf6c5cd94279862b5939395f"} Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.933160 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fed86ba5-c330-411e-bab0-88e86ceb8980-logs\") pod \"fed86ba5-c330-411e-bab0-88e86ceb8980\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.933260 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-scripts\") pod \"fed86ba5-c330-411e-bab0-88e86ceb8980\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.933297 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-config-data\") pod \"fed86ba5-c330-411e-bab0-88e86ceb8980\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.933440 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-secret-key\") pod \"fed86ba5-c330-411e-bab0-88e86ceb8980\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.933484 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-combined-ca-bundle\") pod \"fed86ba5-c330-411e-bab0-88e86ceb8980\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.933523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-tls-certs\") pod \"fed86ba5-c330-411e-bab0-88e86ceb8980\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.933551 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnmfn\" (UniqueName: \"kubernetes.io/projected/fed86ba5-c330-411e-bab0-88e86ceb8980-kube-api-access-mnmfn\") pod \"fed86ba5-c330-411e-bab0-88e86ceb8980\" (UID: \"fed86ba5-c330-411e-bab0-88e86ceb8980\") " Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.961453 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fed86ba5-c330-411e-bab0-88e86ceb8980-logs" (OuterVolumeSpecName: "logs") pod "fed86ba5-c330-411e-bab0-88e86ceb8980" (UID: "fed86ba5-c330-411e-bab0-88e86ceb8980"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.968409 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5160fd6c-8a30-472c-a5f3-67e4f8bf90f8" path="/var/lib/kubelet/pods/5160fd6c-8a30-472c-a5f3-67e4f8bf90f8/volumes" Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.972015 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fed86ba5-c330-411e-bab0-88e86ceb8980" (UID: "fed86ba5-c330-411e-bab0-88e86ceb8980"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.985031 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed86ba5-c330-411e-bab0-88e86ceb8980-kube-api-access-mnmfn" (OuterVolumeSpecName: "kube-api-access-mnmfn") pod "fed86ba5-c330-411e-bab0-88e86ceb8980" (UID: "fed86ba5-c330-411e-bab0-88e86ceb8980"). InnerVolumeSpecName "kube-api-access-mnmfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:36 crc kubenswrapper[4760]: I1125 08:29:36.990929 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fed86ba5-c330-411e-bab0-88e86ceb8980" (UID: "fed86ba5-c330-411e-bab0-88e86ceb8980"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.006950 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-scripts" (OuterVolumeSpecName: "scripts") pod "fed86ba5-c330-411e-bab0-88e86ceb8980" (UID: "fed86ba5-c330-411e-bab0-88e86ceb8980"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.007583 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-config-data" (OuterVolumeSpecName: "config-data") pod "fed86ba5-c330-411e-bab0-88e86ceb8980" (UID: "fed86ba5-c330-411e-bab0-88e86ceb8980"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.037481 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.037538 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.037549 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnmfn\" (UniqueName: \"kubernetes.io/projected/fed86ba5-c330-411e-bab0-88e86ceb8980-kube-api-access-mnmfn\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.037561 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fed86ba5-c330-411e-bab0-88e86ceb8980-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.037570 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.037579 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/fed86ba5-c330-411e-bab0-88e86ceb8980-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.049677 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "fed86ba5-c330-411e-bab0-88e86ceb8980" (UID: "fed86ba5-c330-411e-bab0-88e86ceb8980"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.117416 4760 scope.go:117] "RemoveContainer" containerID="d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.140447 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed86ba5-c330-411e-bab0-88e86ceb8980-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.143826 4760 scope.go:117] "RemoveContainer" containerID="fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d" Nov 25 08:29:37 crc kubenswrapper[4760]: E1125 08:29:37.144626 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d\": container with ID starting with fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d not found: ID does not exist" containerID="fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.144666 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d"} err="failed to get container status \"fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d\": 
rpc error: code = NotFound desc = could not find container \"fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d\": container with ID starting with fad05662ca4165b7647b5792ae3b655db1c85216c664e59be0b4c83660c26d7d not found: ID does not exist" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.144698 4760 scope.go:117] "RemoveContainer" containerID="d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32" Nov 25 08:29:37 crc kubenswrapper[4760]: E1125 08:29:37.145876 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32\": container with ID starting with d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32 not found: ID does not exist" containerID="d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.145906 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32"} err="failed to get container status \"d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32\": rpc error: code = NotFound desc = could not find container \"d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32\": container with ID starting with d2c06dc800b5f81dd4cd66f3dc2d507ac0c1a6672a333c0833f0a1729aeeed32 not found: ID does not exist" Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.180180 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b7dd9bf58-zdxgq"] Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.194369 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b7dd9bf58-zdxgq"] Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.897029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerStarted","Data":"39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8"} Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.899387 4760 generic.go:334] "Generic (PLEG): container finished" podID="fee850d3-ea88-45ef-9a47-56cfe91d2c36" containerID="02ab8fb2b82832f3f33f5094dbdcde15c49e3b4e13d95978d9fac864ecb65acb" exitCode=0 Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.899478 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bb66-account-create-r9w4q" event={"ID":"fee850d3-ea88-45ef-9a47-56cfe91d2c36","Type":"ContainerDied","Data":"02ab8fb2b82832f3f33f5094dbdcde15c49e3b4e13d95978d9fac864ecb65acb"} Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.902418 4760 generic.go:334] "Generic (PLEG): container finished" podID="4ff4c392-598d-40ec-8803-d97ca2429c37" containerID="28841002acc8c9cc96afadf2043270921dd3c06c81a4d376e463e05c33d9208b" exitCode=0 Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.902664 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-skw7b" event={"ID":"4ff4c392-598d-40ec-8803-d97ca2429c37","Type":"ContainerDied","Data":"28841002acc8c9cc96afadf2043270921dd3c06c81a4d376e463e05c33d9208b"} Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.905650 4760 generic.go:334] "Generic (PLEG): container finished" podID="8065060f-1c06-4186-8a41-e864d9256d7b" containerID="00e9f2f7a936579eb5e2e1aace18d53d59e3e30971949b53140b71e564714482" exitCode=0 Nov 25 08:29:37 crc kubenswrapper[4760]: I1125 08:29:37.905744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d8c3-account-create-9hlqt" event={"ID":"8065060f-1c06-4186-8a41-e864d9256d7b","Type":"ContainerDied","Data":"00e9f2f7a936579eb5e2e1aace18d53d59e3e30971949b53140b71e564714482"} Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.367126 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.462292 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf85p\" (UniqueName: \"kubernetes.io/projected/24ba00a9-0675-4154-8db7-a3dec9528ce1-kube-api-access-bf85p\") pod \"24ba00a9-0675-4154-8db7-a3dec9528ce1\" (UID: \"24ba00a9-0675-4154-8db7-a3dec9528ce1\") " Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.462329 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24ba00a9-0675-4154-8db7-a3dec9528ce1-operator-scripts\") pod \"24ba00a9-0675-4154-8db7-a3dec9528ce1\" (UID: \"24ba00a9-0675-4154-8db7-a3dec9528ce1\") " Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.463340 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ba00a9-0675-4154-8db7-a3dec9528ce1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24ba00a9-0675-4154-8db7-a3dec9528ce1" (UID: "24ba00a9-0675-4154-8db7-a3dec9528ce1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.468486 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ba00a9-0675-4154-8db7-a3dec9528ce1-kube-api-access-bf85p" (OuterVolumeSpecName: "kube-api-access-bf85p") pod "24ba00a9-0675-4154-8db7-a3dec9528ce1" (UID: "24ba00a9-0675-4154-8db7-a3dec9528ce1"). InnerVolumeSpecName "kube-api-access-bf85p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.528778 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.538759 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.563651 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0993c794-4a24-476a-b473-ea84948835cd-operator-scripts\") pod \"0993c794-4a24-476a-b473-ea84948835cd\" (UID: \"0993c794-4a24-476a-b473-ea84948835cd\") " Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.563914 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vnzj\" (UniqueName: \"kubernetes.io/projected/0993c794-4a24-476a-b473-ea84948835cd-kube-api-access-4vnzj\") pod \"0993c794-4a24-476a-b473-ea84948835cd\" (UID: \"0993c794-4a24-476a-b473-ea84948835cd\") " Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.564393 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf85p\" (UniqueName: \"kubernetes.io/projected/24ba00a9-0675-4154-8db7-a3dec9528ce1-kube-api-access-bf85p\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.564419 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24ba00a9-0675-4154-8db7-a3dec9528ce1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.564626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0993c794-4a24-476a-b473-ea84948835cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0993c794-4a24-476a-b473-ea84948835cd" (UID: "0993c794-4a24-476a-b473-ea84948835cd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.570378 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0993c794-4a24-476a-b473-ea84948835cd-kube-api-access-4vnzj" (OuterVolumeSpecName: "kube-api-access-4vnzj") pod "0993c794-4a24-476a-b473-ea84948835cd" (UID: "0993c794-4a24-476a-b473-ea84948835cd"). InnerVolumeSpecName "kube-api-access-4vnzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.665780 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637b4ab8-7e6b-4068-993c-5dc8f5975b93-operator-scripts\") pod \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\" (UID: \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\") " Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.666125 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrqmv\" (UniqueName: \"kubernetes.io/projected/637b4ab8-7e6b-4068-993c-5dc8f5975b93-kube-api-access-vrqmv\") pod \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\" (UID: \"637b4ab8-7e6b-4068-993c-5dc8f5975b93\") " Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.666262 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/637b4ab8-7e6b-4068-993c-5dc8f5975b93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "637b4ab8-7e6b-4068-993c-5dc8f5975b93" (UID: "637b4ab8-7e6b-4068-993c-5dc8f5975b93"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.666869 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vnzj\" (UniqueName: \"kubernetes.io/projected/0993c794-4a24-476a-b473-ea84948835cd-kube-api-access-4vnzj\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.666907 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0993c794-4a24-476a-b473-ea84948835cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.666930 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/637b4ab8-7e6b-4068-993c-5dc8f5975b93-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.669740 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/637b4ab8-7e6b-4068-993c-5dc8f5975b93-kube-api-access-vrqmv" (OuterVolumeSpecName: "kube-api-access-vrqmv") pod "637b4ab8-7e6b-4068-993c-5dc8f5975b93" (UID: "637b4ab8-7e6b-4068-993c-5dc8f5975b93"). InnerVolumeSpecName "kube-api-access-vrqmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.769047 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrqmv\" (UniqueName: \"kubernetes.io/projected/637b4ab8-7e6b-4068-993c-5dc8f5975b93-kube-api-access-vrqmv\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.920940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerStarted","Data":"1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965"} Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.920991 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerStarted","Data":"801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20"} Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.923747 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mkm9v" event={"ID":"24ba00a9-0675-4154-8db7-a3dec9528ce1","Type":"ContainerDied","Data":"8d8a7d46489f88782d0be66a62945d3ddcc3c3d8cf6c5cd94279862b5939395f"} Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.923781 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d8a7d46489f88782d0be66a62945d3ddcc3c3d8cf6c5cd94279862b5939395f" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.923788 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mkm9v" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.925962 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-388b-account-create-h5j6g" event={"ID":"637b4ab8-7e6b-4068-993c-5dc8f5975b93","Type":"ContainerDied","Data":"449872ea43e20080f4684fdc77b27b0c6ae52419a4acfc6b9c93e1cbbc766432"} Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.925983 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="449872ea43e20080f4684fdc77b27b0c6ae52419a4acfc6b9c93e1cbbc766432" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.925986 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-388b-account-create-h5j6g" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.935061 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2zt27" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.936044 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2zt27" event={"ID":"0993c794-4a24-476a-b473-ea84948835cd","Type":"ContainerDied","Data":"e09d363001a9006fc9227016164d9f68073fe53e9322b61c16730c2b9c6191be"} Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.936091 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09d363001a9006fc9227016164d9f68073fe53e9322b61c16730c2b9c6191be" Nov 25 08:29:38 crc kubenswrapper[4760]: I1125 08:29:38.976522 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" path="/var/lib/kubelet/pods/fed86ba5-c330-411e-bab0-88e86ceb8980/volumes" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.535846 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.542799 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.549652 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.592284 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hbm8\" (UniqueName: \"kubernetes.io/projected/fee850d3-ea88-45ef-9a47-56cfe91d2c36-kube-api-access-8hbm8\") pod \"fee850d3-ea88-45ef-9a47-56cfe91d2c36\" (UID: \"fee850d3-ea88-45ef-9a47-56cfe91d2c36\") " Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.592367 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee850d3-ea88-45ef-9a47-56cfe91d2c36-operator-scripts\") pod \"fee850d3-ea88-45ef-9a47-56cfe91d2c36\" (UID: \"fee850d3-ea88-45ef-9a47-56cfe91d2c36\") " Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.592420 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8065060f-1c06-4186-8a41-e864d9256d7b-operator-scripts\") pod \"8065060f-1c06-4186-8a41-e864d9256d7b\" (UID: \"8065060f-1c06-4186-8a41-e864d9256d7b\") " Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.592451 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ff4c392-598d-40ec-8803-d97ca2429c37-operator-scripts\") pod \"4ff4c392-598d-40ec-8803-d97ca2429c37\" (UID: \"4ff4c392-598d-40ec-8803-d97ca2429c37\") " Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.592540 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hfxjz\" (UniqueName: \"kubernetes.io/projected/8065060f-1c06-4186-8a41-e864d9256d7b-kube-api-access-hfxjz\") pod \"8065060f-1c06-4186-8a41-e864d9256d7b\" (UID: \"8065060f-1c06-4186-8a41-e864d9256d7b\") " Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.592575 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5txl\" (UniqueName: \"kubernetes.io/projected/4ff4c392-598d-40ec-8803-d97ca2429c37-kube-api-access-n5txl\") pod \"4ff4c392-598d-40ec-8803-d97ca2429c37\" (UID: \"4ff4c392-598d-40ec-8803-d97ca2429c37\") " Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.595436 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee850d3-ea88-45ef-9a47-56cfe91d2c36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fee850d3-ea88-45ef-9a47-56cfe91d2c36" (UID: "fee850d3-ea88-45ef-9a47-56cfe91d2c36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.596474 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff4c392-598d-40ec-8803-d97ca2429c37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ff4c392-598d-40ec-8803-d97ca2429c37" (UID: "4ff4c392-598d-40ec-8803-d97ca2429c37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.596841 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8065060f-1c06-4186-8a41-e864d9256d7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8065060f-1c06-4186-8a41-e864d9256d7b" (UID: "8065060f-1c06-4186-8a41-e864d9256d7b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.605485 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8065060f-1c06-4186-8a41-e864d9256d7b-kube-api-access-hfxjz" (OuterVolumeSpecName: "kube-api-access-hfxjz") pod "8065060f-1c06-4186-8a41-e864d9256d7b" (UID: "8065060f-1c06-4186-8a41-e864d9256d7b"). InnerVolumeSpecName "kube-api-access-hfxjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.605559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff4c392-598d-40ec-8803-d97ca2429c37-kube-api-access-n5txl" (OuterVolumeSpecName: "kube-api-access-n5txl") pod "4ff4c392-598d-40ec-8803-d97ca2429c37" (UID: "4ff4c392-598d-40ec-8803-d97ca2429c37"). InnerVolumeSpecName "kube-api-access-n5txl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.605584 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee850d3-ea88-45ef-9a47-56cfe91d2c36-kube-api-access-8hbm8" (OuterVolumeSpecName: "kube-api-access-8hbm8") pod "fee850d3-ea88-45ef-9a47-56cfe91d2c36" (UID: "fee850d3-ea88-45ef-9a47-56cfe91d2c36"). InnerVolumeSpecName "kube-api-access-8hbm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.694831 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfxjz\" (UniqueName: \"kubernetes.io/projected/8065060f-1c06-4186-8a41-e864d9256d7b-kube-api-access-hfxjz\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.694865 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5txl\" (UniqueName: \"kubernetes.io/projected/4ff4c392-598d-40ec-8803-d97ca2429c37-kube-api-access-n5txl\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.694875 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hbm8\" (UniqueName: \"kubernetes.io/projected/fee850d3-ea88-45ef-9a47-56cfe91d2c36-kube-api-access-8hbm8\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.694886 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee850d3-ea88-45ef-9a47-56cfe91d2c36-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.694895 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8065060f-1c06-4186-8a41-e864d9256d7b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.694903 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ff4c392-598d-40ec-8803-d97ca2429c37-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.946631 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bb66-account-create-r9w4q" 
event={"ID":"fee850d3-ea88-45ef-9a47-56cfe91d2c36","Type":"ContainerDied","Data":"136445e35825d0d08509d090d1ff15eb2c52bb6204186408cee9176f5182e8fa"} Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.946668 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="136445e35825d0d08509d090d1ff15eb2c52bb6204186408cee9176f5182e8fa" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.946710 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bb66-account-create-r9w4q" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.952354 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-skw7b" event={"ID":"4ff4c392-598d-40ec-8803-d97ca2429c37","Type":"ContainerDied","Data":"93326a2e47b83f14d5a2a50bb4599d8ae0e5b693dd0fc51ed599135f9eafdab2"} Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.952386 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93326a2e47b83f14d5a2a50bb4599d8ae0e5b693dd0fc51ed599135f9eafdab2" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.952657 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-skw7b" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.954000 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d8c3-account-create-9hlqt" event={"ID":"8065060f-1c06-4186-8a41-e864d9256d7b","Type":"ContainerDied","Data":"508a836bd639d421425fd20152149e632b1ac0e5e967315a67a8701e6dad9328"} Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.954025 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508a836bd639d421425fd20152149e632b1ac0e5e967315a67a8701e6dad9328" Nov 25 08:29:39 crc kubenswrapper[4760]: I1125 08:29:39.954098 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d8c3-account-create-9hlqt" Nov 25 08:29:40 crc kubenswrapper[4760]: I1125 08:29:40.472174 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:40 crc kubenswrapper[4760]: I1125 08:29:40.968539 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerStarted","Data":"54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa"} Nov 25 08:29:40 crc kubenswrapper[4760]: I1125 08:29:40.968703 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 08:29:41 crc kubenswrapper[4760]: I1125 08:29:41.001555 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.957393357 podStartE2EDuration="6.001532912s" podCreationTimestamp="2025-11-25 08:29:35 +0000 UTC" firstStartedPulling="2025-11-25 08:29:36.156507414 +0000 UTC m=+1109.865538209" lastFinishedPulling="2025-11-25 08:29:40.200646969 +0000 UTC m=+1113.909677764" observedRunningTime="2025-11-25 08:29:40.992685427 +0000 UTC m=+1114.701716222" watchObservedRunningTime="2025-11-25 08:29:41.001532912 +0000 UTC m=+1114.710563717" Nov 25 08:29:41 crc kubenswrapper[4760]: I1125 08:29:41.976827 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="ceilometer-central-agent" containerID="cri-o://39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8" gracePeriod=30 Nov 25 08:29:41 crc kubenswrapper[4760]: I1125 08:29:41.977277 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="proxy-httpd" containerID="cri-o://54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa" gracePeriod=30 Nov 
25 08:29:41 crc kubenswrapper[4760]: I1125 08:29:41.977358 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="ceilometer-notification-agent" containerID="cri-o://801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20" gracePeriod=30 Nov 25 08:29:41 crc kubenswrapper[4760]: I1125 08:29:41.977367 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="sg-core" containerID="cri-o://1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965" gracePeriod=30 Nov 25 08:29:42 crc kubenswrapper[4760]: E1125 08:29:42.217799 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33f356b2_7c4e_4ce6_86d5_a6771ef86271.slice/crio-conmon-54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33f356b2_7c4e_4ce6_86d5_a6771ef86271.slice/crio-54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8499ed65_d46c_4e61_b113_06350f33838c.slice/crio-0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377\": RecentStats: unable to find data in memory cache]" Nov 25 08:29:42 crc kubenswrapper[4760]: I1125 08:29:42.990998 4760 generic.go:334] "Generic (PLEG): container finished" podID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerID="54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa" exitCode=0 Nov 25 08:29:42 crc kubenswrapper[4760]: I1125 08:29:42.991368 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerID="1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965" exitCode=2 Nov 25 08:29:42 crc kubenswrapper[4760]: I1125 08:29:42.991383 4760 generic.go:334] "Generic (PLEG): container finished" podID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerID="801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20" exitCode=0 Nov 25 08:29:42 crc kubenswrapper[4760]: I1125 08:29:42.991224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerDied","Data":"54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa"} Nov 25 08:29:42 crc kubenswrapper[4760]: I1125 08:29:42.991431 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerDied","Data":"1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965"} Nov 25 08:29:42 crc kubenswrapper[4760]: I1125 08:29:42.991451 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerDied","Data":"801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20"} Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.834307 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.865195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-sg-core-conf-yaml\") pod \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.865300 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77kqm\" (UniqueName: \"kubernetes.io/projected/33f356b2-7c4e-4ce6-86d5-a6771ef86271-kube-api-access-77kqm\") pod \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.865329 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-scripts\") pod \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.865400 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-combined-ca-bundle\") pod \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.865436 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-log-httpd\") pod \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.865505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-run-httpd\") pod \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.865545 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-config-data\") pod \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\" (UID: \"33f356b2-7c4e-4ce6-86d5-a6771ef86271\") " Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.866393 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "33f356b2-7c4e-4ce6-86d5-a6771ef86271" (UID: "33f356b2-7c4e-4ce6-86d5-a6771ef86271"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.867051 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "33f356b2-7c4e-4ce6-86d5-a6771ef86271" (UID: "33f356b2-7c4e-4ce6-86d5-a6771ef86271"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.874501 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-scripts" (OuterVolumeSpecName: "scripts") pod "33f356b2-7c4e-4ce6-86d5-a6771ef86271" (UID: "33f356b2-7c4e-4ce6-86d5-a6771ef86271"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.880497 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f356b2-7c4e-4ce6-86d5-a6771ef86271-kube-api-access-77kqm" (OuterVolumeSpecName: "kube-api-access-77kqm") pod "33f356b2-7c4e-4ce6-86d5-a6771ef86271" (UID: "33f356b2-7c4e-4ce6-86d5-a6771ef86271"). InnerVolumeSpecName "kube-api-access-77kqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.898943 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "33f356b2-7c4e-4ce6-86d5-a6771ef86271" (UID: "33f356b2-7c4e-4ce6-86d5-a6771ef86271"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.941184 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33f356b2-7c4e-4ce6-86d5-a6771ef86271" (UID: "33f356b2-7c4e-4ce6-86d5-a6771ef86271"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.967814 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.967881 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.967894 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33f356b2-7c4e-4ce6-86d5-a6771ef86271-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.967904 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.967916 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77kqm\" (UniqueName: \"kubernetes.io/projected/33f356b2-7c4e-4ce6-86d5-a6771ef86271-kube-api-access-77kqm\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.967929 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:43 crc kubenswrapper[4760]: I1125 08:29:43.972455 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-config-data" (OuterVolumeSpecName: "config-data") pod "33f356b2-7c4e-4ce6-86d5-a6771ef86271" (UID: "33f356b2-7c4e-4ce6-86d5-a6771ef86271"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.002951 4760 generic.go:334] "Generic (PLEG): container finished" podID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerID="39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8" exitCode=0 Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.003012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerDied","Data":"39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8"} Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.003070 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33f356b2-7c4e-4ce6-86d5-a6771ef86271","Type":"ContainerDied","Data":"c04791adba158872dde7161f84204b784524069f11893ed0eb5234fe7d47cbff"} Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.003090 4760 scope.go:117] "RemoveContainer" containerID="54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.003093 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.029927 4760 scope.go:117] "RemoveContainer" containerID="1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.044281 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.051626 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.073322 4760 scope.go:117] "RemoveContainer" containerID="801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.074464 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f356b2-7c4e-4ce6-86d5-a6771ef86271-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.075316 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.075777 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ba00a9-0675-4154-8db7-a3dec9528ce1" containerName="mariadb-database-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.075796 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ba00a9-0675-4154-8db7-a3dec9528ce1" containerName="mariadb-database-create" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.075820 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637b4ab8-7e6b-4068-993c-5dc8f5975b93" containerName="mariadb-account-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.075829 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="637b4ab8-7e6b-4068-993c-5dc8f5975b93" containerName="mariadb-account-create" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.075847 
4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff4c392-598d-40ec-8803-d97ca2429c37" containerName="mariadb-database-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.075855 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff4c392-598d-40ec-8803-d97ca2429c37" containerName="mariadb-database-create" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.075873 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.075881 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.075900 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0993c794-4a24-476a-b473-ea84948835cd" containerName="mariadb-database-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.075925 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0993c794-4a24-476a-b473-ea84948835cd" containerName="mariadb-database-create" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.075937 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="ceilometer-central-agent" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.075945 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="ceilometer-central-agent" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.075956 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="proxy-httpd" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.075963 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="proxy-httpd" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 
08:29:44.075979 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon-log" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.075987 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon-log" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.076002 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="ceilometer-notification-agent" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076010 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="ceilometer-notification-agent" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.076030 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee850d3-ea88-45ef-9a47-56cfe91d2c36" containerName="mariadb-account-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076038 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee850d3-ea88-45ef-9a47-56cfe91d2c36" containerName="mariadb-account-create" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.076057 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="sg-core" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076064 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="sg-core" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.076078 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8065060f-1c06-4186-8a41-e864d9256d7b" containerName="mariadb-account-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076086 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8065060f-1c06-4186-8a41-e864d9256d7b" containerName="mariadb-account-create" Nov 25 08:29:44 crc 
kubenswrapper[4760]: I1125 08:29:44.076366 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0993c794-4a24-476a-b473-ea84948835cd" containerName="mariadb-database-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076391 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8065060f-1c06-4186-8a41-e864d9256d7b" containerName="mariadb-account-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076403 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076422 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee850d3-ea88-45ef-9a47-56cfe91d2c36" containerName="mariadb-account-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076433 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="ceilometer-notification-agent" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076446 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="sg-core" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076461 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ba00a9-0675-4154-8db7-a3dec9528ce1" containerName="mariadb-database-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076475 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="637b4ab8-7e6b-4068-993c-5dc8f5975b93" containerName="mariadb-account-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076487 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff4c392-598d-40ec-8803-d97ca2429c37" containerName="mariadb-database-create" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076501 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="ceilometer-central-agent" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076511 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fed86ba5-c330-411e-bab0-88e86ceb8980" containerName="horizon-log" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.076524 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" containerName="proxy-httpd" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.079802 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.082790 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.083196 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.091605 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.108036 4760 scope.go:117] "RemoveContainer" containerID="39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.136442 4760 scope.go:117] "RemoveContainer" containerID="54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.137803 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa\": container with ID starting with 54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa not found: ID does not exist" containerID="54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa" Nov 25 08:29:44 crc kubenswrapper[4760]: 
I1125 08:29:44.137841 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa"} err="failed to get container status \"54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa\": rpc error: code = NotFound desc = could not find container \"54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa\": container with ID starting with 54db15edce4a2b67df856d6029a4d02e420272740135f45abacacf75989ab2aa not found: ID does not exist" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.137862 4760 scope.go:117] "RemoveContainer" containerID="1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.138391 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965\": container with ID starting with 1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965 not found: ID does not exist" containerID="1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.138438 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965"} err="failed to get container status \"1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965\": rpc error: code = NotFound desc = could not find container \"1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965\": container with ID starting with 1553d9e119bd29a359d40513387b01f2c633fabbc408fa5a653cd8d3272e6965 not found: ID does not exist" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.138457 4760 scope.go:117] "RemoveContainer" containerID="801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20" Nov 25 08:29:44 crc 
kubenswrapper[4760]: E1125 08:29:44.138756 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20\": container with ID starting with 801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20 not found: ID does not exist" containerID="801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.138785 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20"} err="failed to get container status \"801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20\": rpc error: code = NotFound desc = could not find container \"801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20\": container with ID starting with 801e99c5d63e3d0bfc23ee01963d1314d432f19f5c804380b7663bbe85a42c20 not found: ID does not exist" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.138803 4760 scope.go:117] "RemoveContainer" containerID="39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8" Nov 25 08:29:44 crc kubenswrapper[4760]: E1125 08:29:44.139590 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8\": container with ID starting with 39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8 not found: ID does not exist" containerID="39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.139632 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8"} err="failed to get container status 
\"39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8\": rpc error: code = NotFound desc = could not find container \"39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8\": container with ID starting with 39dce73e70e0cc76d8c6b02f35f5427e7bf40c73e677daf7bb485ba613452fc8 not found: ID does not exist" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.277864 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2s8j\" (UniqueName: \"kubernetes.io/projected/0ad91a00-7be1-4543-9def-eac01e503bc7-kube-api-access-b2s8j\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.277926 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.278049 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-run-httpd\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.278083 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.278528 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-config-data\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.278611 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-scripts\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.278669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-log-httpd\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.380555 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-config-data\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.380942 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-scripts\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.381066 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-log-httpd\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc 
kubenswrapper[4760]: I1125 08:29:44.381186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2s8j\" (UniqueName: \"kubernetes.io/projected/0ad91a00-7be1-4543-9def-eac01e503bc7-kube-api-access-b2s8j\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.381360 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.381500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-run-httpd\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.381617 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.382069 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-log-httpd\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.382155 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-run-httpd\") 
pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.385694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-config-data\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.386055 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.386880 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.387230 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-scripts\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.399143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2s8j\" (UniqueName: \"kubernetes.io/projected/0ad91a00-7be1-4543-9def-eac01e503bc7-kube-api-access-b2s8j\") pod \"ceilometer-0\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") " pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.416952 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.876990 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:29:44 crc kubenswrapper[4760]: W1125 08:29:44.882340 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ad91a00_7be1_4543_9def_eac01e503bc7.slice/crio-6b26f0ca4c78bd2a5810e89fbd0580c58a44bc43b05227c382244fa455136153 WatchSource:0}: Error finding container 6b26f0ca4c78bd2a5810e89fbd0580c58a44bc43b05227c382244fa455136153: Status 404 returned error can't find the container with id 6b26f0ca4c78bd2a5810e89fbd0580c58a44bc43b05227c382244fa455136153 Nov 25 08:29:44 crc kubenswrapper[4760]: I1125 08:29:44.958260 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f356b2-7c4e-4ce6-86d5-a6771ef86271" path="/var/lib/kubelet/pods/33f356b2-7c4e-4ce6-86d5-a6771ef86271/volumes" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.011895 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerStarted","Data":"6b26f0ca4c78bd2a5810e89fbd0580c58a44bc43b05227c382244fa455136153"} Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.641051 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gvr9l"] Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.644482 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.649268 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.649727 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.649734 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gczdm" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.659641 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gvr9l"] Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.705071 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-config-data\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.705175 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.705202 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-scripts\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " 
pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.705268 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pn2\" (UniqueName: \"kubernetes.io/projected/36f30b80-e115-44c9-8995-f09ee775ce7b-kube-api-access-25pn2\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.805947 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-config-data\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.806361 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.806481 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-scripts\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.806652 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25pn2\" (UniqueName: \"kubernetes.io/projected/36f30b80-e115-44c9-8995-f09ee775ce7b-kube-api-access-25pn2\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: 
\"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.812328 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.812531 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-scripts\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.814718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-config-data\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.827705 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pn2\" (UniqueName: \"kubernetes.io/projected/36f30b80-e115-44c9-8995-f09ee775ce7b-kube-api-access-25pn2\") pod \"nova-cell0-conductor-db-sync-gvr9l\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:45 crc kubenswrapper[4760]: I1125 08:29:45.962513 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:29:46 crc kubenswrapper[4760]: I1125 08:29:46.439828 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gvr9l"] Nov 25 08:29:47 crc kubenswrapper[4760]: I1125 08:29:47.038032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gvr9l" event={"ID":"36f30b80-e115-44c9-8995-f09ee775ce7b","Type":"ContainerStarted","Data":"73eaa7e559323f98244d9c17302a6f8d78d5e05195d331c5f6fa59b420a156d0"} Nov 25 08:29:47 crc kubenswrapper[4760]: I1125 08:29:47.041187 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerStarted","Data":"e9f27c864347d81879449c4a65f13a5c7477167b6dfeb9748e9a5aa8347e6cb4"} Nov 25 08:29:48 crc kubenswrapper[4760]: I1125 08:29:48.051693 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerStarted","Data":"83b55ac537070eaead07855c0b03320cc501615093b44933acbe219cde306dc0"} Nov 25 08:29:49 crc kubenswrapper[4760]: I1125 08:29:49.062943 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerStarted","Data":"fdea27ee4298a4fc35f6b6c4ff6e9838adaddfa93f4df5c6643bf911d5ad65b8"} Nov 25 08:29:52 crc kubenswrapper[4760]: E1125 08:29:52.455194 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8499ed65_d46c_4e61_b113_06350f33838c.slice/crio-0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377\": RecentStats: unable to find data in memory cache]" Nov 25 08:29:54 crc kubenswrapper[4760]: I1125 08:29:54.132988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-gvr9l" event={"ID":"36f30b80-e115-44c9-8995-f09ee775ce7b","Type":"ContainerStarted","Data":"49049853a94d1f10b388fdd15cdd1b37778a3435229c40fc9e75dd19ea42d278"} Nov 25 08:29:54 crc kubenswrapper[4760]: I1125 08:29:54.136844 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerStarted","Data":"d6119cb9cc6cd9c37368527c27bf0cdbc1b1536636c19d3475a7ab18b2c1c1e0"} Nov 25 08:29:54 crc kubenswrapper[4760]: I1125 08:29:54.137667 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 08:29:54 crc kubenswrapper[4760]: I1125 08:29:54.191024 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gvr9l" podStartSLOduration=2.184169964 podStartE2EDuration="9.190994132s" podCreationTimestamp="2025-11-25 08:29:45 +0000 UTC" firstStartedPulling="2025-11-25 08:29:46.481343988 +0000 UTC m=+1120.190374783" lastFinishedPulling="2025-11-25 08:29:53.488168156 +0000 UTC m=+1127.197198951" observedRunningTime="2025-11-25 08:29:54.152418492 +0000 UTC m=+1127.861449347" watchObservedRunningTime="2025-11-25 08:29:54.190994132 +0000 UTC m=+1127.900024947" Nov 25 08:29:54 crc kubenswrapper[4760]: I1125 08:29:54.191956 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.062926854 podStartE2EDuration="10.19194341s" podCreationTimestamp="2025-11-25 08:29:44 +0000 UTC" firstStartedPulling="2025-11-25 08:29:44.893483784 +0000 UTC m=+1118.602514579" lastFinishedPulling="2025-11-25 08:29:53.02250034 +0000 UTC m=+1126.731531135" observedRunningTime="2025-11-25 08:29:54.187522682 +0000 UTC m=+1127.896553477" watchObservedRunningTime="2025-11-25 08:29:54.19194341 +0000 UTC m=+1127.900974225" Nov 25 08:29:54 crc kubenswrapper[4760]: I1125 08:29:54.975685 4760 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="30ead1cc-7ac6-4208-ba63-d5e41160e015" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.155:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.151316 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4"] Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.153364 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.155622 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.155919 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.163688 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4"] Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.280351 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9012ddf-738f-4b3e-99ce-0aab039a4171-config-volume\") pod \"collect-profiles-29400990-zl6p4\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.280620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9012ddf-738f-4b3e-99ce-0aab039a4171-secret-volume\") pod 
\"collect-profiles-29400990-zl6p4\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.280707 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk5l6\" (UniqueName: \"kubernetes.io/projected/a9012ddf-738f-4b3e-99ce-0aab039a4171-kube-api-access-mk5l6\") pod \"collect-profiles-29400990-zl6p4\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.382513 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9012ddf-738f-4b3e-99ce-0aab039a4171-secret-volume\") pod \"collect-profiles-29400990-zl6p4\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.382565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk5l6\" (UniqueName: \"kubernetes.io/projected/a9012ddf-738f-4b3e-99ce-0aab039a4171-kube-api-access-mk5l6\") pod \"collect-profiles-29400990-zl6p4\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.382650 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9012ddf-738f-4b3e-99ce-0aab039a4171-config-volume\") pod \"collect-profiles-29400990-zl6p4\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.383517 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9012ddf-738f-4b3e-99ce-0aab039a4171-config-volume\") pod \"collect-profiles-29400990-zl6p4\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.391780 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9012ddf-738f-4b3e-99ce-0aab039a4171-secret-volume\") pod \"collect-profiles-29400990-zl6p4\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.399139 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk5l6\" (UniqueName: \"kubernetes.io/projected/a9012ddf-738f-4b3e-99ce-0aab039a4171-kube-api-access-mk5l6\") pod \"collect-profiles-29400990-zl6p4\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.473686 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:00 crc kubenswrapper[4760]: I1125 08:30:00.891700 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4"] Nov 25 08:30:01 crc kubenswrapper[4760]: I1125 08:30:01.201311 4760 generic.go:334] "Generic (PLEG): container finished" podID="a9012ddf-738f-4b3e-99ce-0aab039a4171" containerID="eae7eff043228114d341ca5d73e425432439abacff20b95fd2a9adbe2be14cf7" exitCode=0 Nov 25 08:30:01 crc kubenswrapper[4760]: I1125 08:30:01.201433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" event={"ID":"a9012ddf-738f-4b3e-99ce-0aab039a4171","Type":"ContainerDied","Data":"eae7eff043228114d341ca5d73e425432439abacff20b95fd2a9adbe2be14cf7"} Nov 25 08:30:01 crc kubenswrapper[4760]: I1125 08:30:01.201661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" event={"ID":"a9012ddf-738f-4b3e-99ce-0aab039a4171","Type":"ContainerStarted","Data":"6ffe76c769fdcfcf734d5f8f5900beedb312dfff06ea3ccb7e7208880ddcbcc7"} Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.545774 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.621377 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9012ddf-738f-4b3e-99ce-0aab039a4171-config-volume\") pod \"a9012ddf-738f-4b3e-99ce-0aab039a4171\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.621423 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9012ddf-738f-4b3e-99ce-0aab039a4171-secret-volume\") pod \"a9012ddf-738f-4b3e-99ce-0aab039a4171\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.621545 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk5l6\" (UniqueName: \"kubernetes.io/projected/a9012ddf-738f-4b3e-99ce-0aab039a4171-kube-api-access-mk5l6\") pod \"a9012ddf-738f-4b3e-99ce-0aab039a4171\" (UID: \"a9012ddf-738f-4b3e-99ce-0aab039a4171\") " Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.622107 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9012ddf-738f-4b3e-99ce-0aab039a4171-config-volume" (OuterVolumeSpecName: "config-volume") pod "a9012ddf-738f-4b3e-99ce-0aab039a4171" (UID: "a9012ddf-738f-4b3e-99ce-0aab039a4171"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.622722 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9012ddf-738f-4b3e-99ce-0aab039a4171-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.627408 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9012ddf-738f-4b3e-99ce-0aab039a4171-kube-api-access-mk5l6" (OuterVolumeSpecName: "kube-api-access-mk5l6") pod "a9012ddf-738f-4b3e-99ce-0aab039a4171" (UID: "a9012ddf-738f-4b3e-99ce-0aab039a4171"). InnerVolumeSpecName "kube-api-access-mk5l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.628040 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9012ddf-738f-4b3e-99ce-0aab039a4171-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a9012ddf-738f-4b3e-99ce-0aab039a4171" (UID: "a9012ddf-738f-4b3e-99ce-0aab039a4171"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:02 crc kubenswrapper[4760]: E1125 08:30:02.693996 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8499ed65_d46c_4e61_b113_06350f33838c.slice/crio-0f6ae112f25cd0d39059cf682ef8af492b86d6bb3a80856da9ff7dec1873f377\": RecentStats: unable to find data in memory cache]" Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.724536 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9012ddf-738f-4b3e-99ce-0aab039a4171-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:02 crc kubenswrapper[4760]: I1125 08:30:02.724819 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk5l6\" (UniqueName: \"kubernetes.io/projected/a9012ddf-738f-4b3e-99ce-0aab039a4171-kube-api-access-mk5l6\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:03 crc kubenswrapper[4760]: I1125 08:30:03.223878 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" event={"ID":"a9012ddf-738f-4b3e-99ce-0aab039a4171","Type":"ContainerDied","Data":"6ffe76c769fdcfcf734d5f8f5900beedb312dfff06ea3ccb7e7208880ddcbcc7"} Nov 25 08:30:03 crc kubenswrapper[4760]: I1125 08:30:03.223916 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ffe76c769fdcfcf734d5f8f5900beedb312dfff06ea3ccb7e7208880ddcbcc7" Nov 25 08:30:03 crc kubenswrapper[4760]: I1125 08:30:03.223935 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4" Nov 25 08:30:04 crc kubenswrapper[4760]: I1125 08:30:04.234351 4760 generic.go:334] "Generic (PLEG): container finished" podID="36f30b80-e115-44c9-8995-f09ee775ce7b" containerID="49049853a94d1f10b388fdd15cdd1b37778a3435229c40fc9e75dd19ea42d278" exitCode=0 Nov 25 08:30:04 crc kubenswrapper[4760]: I1125 08:30:04.234439 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gvr9l" event={"ID":"36f30b80-e115-44c9-8995-f09ee775ce7b","Type":"ContainerDied","Data":"49049853a94d1f10b388fdd15cdd1b37778a3435229c40fc9e75dd19ea42d278"} Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.599590 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.672031 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25pn2\" (UniqueName: \"kubernetes.io/projected/36f30b80-e115-44c9-8995-f09ee775ce7b-kube-api-access-25pn2\") pod \"36f30b80-e115-44c9-8995-f09ee775ce7b\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.672168 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-config-data\") pod \"36f30b80-e115-44c9-8995-f09ee775ce7b\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.672388 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-scripts\") pod \"36f30b80-e115-44c9-8995-f09ee775ce7b\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.672525 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-combined-ca-bundle\") pod \"36f30b80-e115-44c9-8995-f09ee775ce7b\" (UID: \"36f30b80-e115-44c9-8995-f09ee775ce7b\") " Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.678269 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-scripts" (OuterVolumeSpecName: "scripts") pod "36f30b80-e115-44c9-8995-f09ee775ce7b" (UID: "36f30b80-e115-44c9-8995-f09ee775ce7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.687522 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f30b80-e115-44c9-8995-f09ee775ce7b-kube-api-access-25pn2" (OuterVolumeSpecName: "kube-api-access-25pn2") pod "36f30b80-e115-44c9-8995-f09ee775ce7b" (UID: "36f30b80-e115-44c9-8995-f09ee775ce7b"). InnerVolumeSpecName "kube-api-access-25pn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.702541 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-config-data" (OuterVolumeSpecName: "config-data") pod "36f30b80-e115-44c9-8995-f09ee775ce7b" (UID: "36f30b80-e115-44c9-8995-f09ee775ce7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.706979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36f30b80-e115-44c9-8995-f09ee775ce7b" (UID: "36f30b80-e115-44c9-8995-f09ee775ce7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.775366 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.775394 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.775403 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25pn2\" (UniqueName: \"kubernetes.io/projected/36f30b80-e115-44c9-8995-f09ee775ce7b-kube-api-access-25pn2\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:05 crc kubenswrapper[4760]: I1125 08:30:05.775413 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f30b80-e115-44c9-8995-f09ee775ce7b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.256765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gvr9l" event={"ID":"36f30b80-e115-44c9-8995-f09ee775ce7b","Type":"ContainerDied","Data":"73eaa7e559323f98244d9c17302a6f8d78d5e05195d331c5f6fa59b420a156d0"} Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.256807 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73eaa7e559323f98244d9c17302a6f8d78d5e05195d331c5f6fa59b420a156d0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.256888 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gvr9l" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.333981 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 08:30:06 crc kubenswrapper[4760]: E1125 08:30:06.334415 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f30b80-e115-44c9-8995-f09ee775ce7b" containerName="nova-cell0-conductor-db-sync" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.334439 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f30b80-e115-44c9-8995-f09ee775ce7b" containerName="nova-cell0-conductor-db-sync" Nov 25 08:30:06 crc kubenswrapper[4760]: E1125 08:30:06.334474 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9012ddf-738f-4b3e-99ce-0aab039a4171" containerName="collect-profiles" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.334481 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9012ddf-738f-4b3e-99ce-0aab039a4171" containerName="collect-profiles" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.334693 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f30b80-e115-44c9-8995-f09ee775ce7b" containerName="nova-cell0-conductor-db-sync" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.334724 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9012ddf-738f-4b3e-99ce-0aab039a4171" containerName="collect-profiles" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.339518 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.343644 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.344502 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gczdm" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.352054 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.486884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3cadcf-b35a-4f88-9f0a-684f735164a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8e3cadcf-b35a-4f88-9f0a-684f735164a0\") " pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.486943 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tl9q\" (UniqueName: \"kubernetes.io/projected/8e3cadcf-b35a-4f88-9f0a-684f735164a0-kube-api-access-5tl9q\") pod \"nova-cell0-conductor-0\" (UID: \"8e3cadcf-b35a-4f88-9f0a-684f735164a0\") " pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.487075 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3cadcf-b35a-4f88-9f0a-684f735164a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8e3cadcf-b35a-4f88-9f0a-684f735164a0\") " pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.589042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e3cadcf-b35a-4f88-9f0a-684f735164a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8e3cadcf-b35a-4f88-9f0a-684f735164a0\") " pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.589097 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tl9q\" (UniqueName: \"kubernetes.io/projected/8e3cadcf-b35a-4f88-9f0a-684f735164a0-kube-api-access-5tl9q\") pod \"nova-cell0-conductor-0\" (UID: \"8e3cadcf-b35a-4f88-9f0a-684f735164a0\") " pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.589120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3cadcf-b35a-4f88-9f0a-684f735164a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8e3cadcf-b35a-4f88-9f0a-684f735164a0\") " pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.596593 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3cadcf-b35a-4f88-9f0a-684f735164a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8e3cadcf-b35a-4f88-9f0a-684f735164a0\") " pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.600198 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3cadcf-b35a-4f88-9f0a-684f735164a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8e3cadcf-b35a-4f88-9f0a-684f735164a0\") " pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.610020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tl9q\" (UniqueName: \"kubernetes.io/projected/8e3cadcf-b35a-4f88-9f0a-684f735164a0-kube-api-access-5tl9q\") pod \"nova-cell0-conductor-0\" (UID: 
\"8e3cadcf-b35a-4f88-9f0a-684f735164a0\") " pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:06 crc kubenswrapper[4760]: I1125 08:30:06.661688 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:07 crc kubenswrapper[4760]: I1125 08:30:07.106103 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 08:30:07 crc kubenswrapper[4760]: W1125 08:30:07.110929 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e3cadcf_b35a_4f88_9f0a_684f735164a0.slice/crio-6a17652b7b7bcfe784114bb96e42670dd590902c59ce9f83e89a272f2b6c1092 WatchSource:0}: Error finding container 6a17652b7b7bcfe784114bb96e42670dd590902c59ce9f83e89a272f2b6c1092: Status 404 returned error can't find the container with id 6a17652b7b7bcfe784114bb96e42670dd590902c59ce9f83e89a272f2b6c1092 Nov 25 08:30:07 crc kubenswrapper[4760]: I1125 08:30:07.266416 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8e3cadcf-b35a-4f88-9f0a-684f735164a0","Type":"ContainerStarted","Data":"6a17652b7b7bcfe784114bb96e42670dd590902c59ce9f83e89a272f2b6c1092"} Nov 25 08:30:08 crc kubenswrapper[4760]: I1125 08:30:08.274885 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8e3cadcf-b35a-4f88-9f0a-684f735164a0","Type":"ContainerStarted","Data":"b2e65a87a7e7942a2e8fffbc9dcdf30c80aa418bbda537ec02535e0872874fc3"} Nov 25 08:30:08 crc kubenswrapper[4760]: I1125 08:30:08.276454 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:08 crc kubenswrapper[4760]: I1125 08:30:08.292369 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.292347472 podStartE2EDuration="2.292347472s" 
podCreationTimestamp="2025-11-25 08:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:08.289234353 +0000 UTC m=+1141.998265148" watchObservedRunningTime="2025-11-25 08:30:08.292347472 +0000 UTC m=+1142.001378267" Nov 25 08:30:14 crc kubenswrapper[4760]: I1125 08:30:14.421537 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 08:30:16 crc kubenswrapper[4760]: I1125 08:30:16.693857 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.125448 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.125657 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="50f445d9-b3be-421d-b30a-89759c1ad2e8" containerName="kube-state-metrics" containerID="cri-o://00c7ebe103517c8eb5440b169de110b1f929b8edd695ff408bf0803c5d8e40f1" gracePeriod=30 Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.234583 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zv989"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.236214 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.242715 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.244071 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.253180 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zv989"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.374735 4760 generic.go:334] "Generic (PLEG): container finished" podID="50f445d9-b3be-421d-b30a-89759c1ad2e8" containerID="00c7ebe103517c8eb5440b169de110b1f929b8edd695ff408bf0803c5d8e40f1" exitCode=2 Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.374799 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"50f445d9-b3be-421d-b30a-89759c1ad2e8","Type":"ContainerDied","Data":"00c7ebe103517c8eb5440b169de110b1f929b8edd695ff408bf0803c5d8e40f1"} Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.399530 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49mf\" (UniqueName: \"kubernetes.io/projected/152d5f92-3188-4d96-8594-455aacbb0e4a-kube-api-access-h49mf\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.399684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-config-data\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc 
kubenswrapper[4760]: I1125 08:30:17.399719 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-scripts\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.399783 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.446337 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.447844 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.458307 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.459445 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.461505 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.464056 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.468075 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.492307 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.502329 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h49mf\" (UniqueName: \"kubernetes.io/projected/152d5f92-3188-4d96-8594-455aacbb0e4a-kube-api-access-h49mf\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.502428 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-config-data\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.502455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-scripts\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.502500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.514846 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-scripts\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.514926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-config-data\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.521949 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h49mf\" (UniqueName: \"kubernetes.io/projected/152d5f92-3188-4d96-8594-455aacbb0e4a-kube-api-access-h49mf\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.527239 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.544873 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.544974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zv989\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.549464 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.551031 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.603556 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-config-data\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.604501 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.604557 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nwwm\" (UniqueName: \"kubernetes.io/projected/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-kube-api-access-7nwwm\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.604621 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf252\" (UniqueName: \"kubernetes.io/projected/fd9e4283-c692-42e6-9205-d00799923720-kube-api-access-mf252\") pod \"nova-scheduler-0\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.604721 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.604871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-config-data\") pod \"nova-scheduler-0\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.604909 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-logs\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.613450 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706444 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706495 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706549 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-config-data\") pod \"nova-scheduler-0\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706576 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-logs\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706619 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-config-data\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706717 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706783 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nwwm\" (UniqueName: \"kubernetes.io/projected/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-kube-api-access-7nwwm\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706802 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8vt\" (UniqueName: \"kubernetes.io/projected/c606d7ed-7669-4df1-bc31-851c14fdbc73-kube-api-access-8z8vt\") pod \"nova-cell1-novncproxy-0\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.706866 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf252\" (UniqueName: \"kubernetes.io/projected/fd9e4283-c692-42e6-9205-d00799923720-kube-api-access-mf252\") pod \"nova-scheduler-0\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.717695 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-logs\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.718692 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-config-data\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.723833 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-config-data\") pod \"nova-scheduler-0\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.733492 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.734106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.735219 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.736027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.738533 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.742692 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nwwm\" (UniqueName: \"kubernetes.io/projected/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-kube-api-access-7nwwm\") pod \"nova-api-0\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.752760 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.755152 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf252\" (UniqueName: \"kubernetes.io/projected/fd9e4283-c692-42e6-9205-d00799923720-kube-api-access-mf252\") pod \"nova-scheduler-0\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.773228 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.806392 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.812114 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.812544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.812591 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4n7n\" (UniqueName: \"kubernetes.io/projected/a2422512-0bfe-4e14-be52-d3ced671911b-kube-api-access-r4n7n\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.812643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-config-data\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.812728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.812748 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc 
kubenswrapper[4760]: I1125 08:30:17.812768 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2422512-0bfe-4e14-be52-d3ced671911b-logs\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.812785 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8vt\" (UniqueName: \"kubernetes.io/projected/c606d7ed-7669-4df1-bc31-851c14fdbc73-kube-api-access-8z8vt\") pod \"nova-cell1-novncproxy-0\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.817426 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.823922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.832722 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8vt\" (UniqueName: \"kubernetes.io/projected/c606d7ed-7669-4df1-bc31-851c14fdbc73-kube-api-access-8z8vt\") pod \"nova-cell1-novncproxy-0\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.844570 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-69494d9f89-bwbsn"] Nov 25 08:30:17 crc kubenswrapper[4760]: E1125 08:30:17.844958 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f445d9-b3be-421d-b30a-89759c1ad2e8" containerName="kube-state-metrics" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.844971 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f445d9-b3be-421d-b30a-89759c1ad2e8" containerName="kube-state-metrics" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.845128 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f445d9-b3be-421d-b30a-89759c1ad2e8" containerName="kube-state-metrics" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.846096 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.862644 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69494d9f89-bwbsn"] Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.914469 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxwc4\" (UniqueName: \"kubernetes.io/projected/50f445d9-b3be-421d-b30a-89759c1ad2e8-kube-api-access-hxwc4\") pod \"50f445d9-b3be-421d-b30a-89759c1ad2e8\" (UID: \"50f445d9-b3be-421d-b30a-89759c1ad2e8\") " Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.914959 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.915030 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2422512-0bfe-4e14-be52-d3ced671911b-logs\") pod 
\"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.915137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-dns-svc\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.915289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4n7n\" (UniqueName: \"kubernetes.io/projected/a2422512-0bfe-4e14-be52-d3ced671911b-kube-api-access-r4n7n\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.915324 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-sb\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.916170 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-config\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.916205 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-nb\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: 
\"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.916305 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-config-data\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.915989 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2422512-0bfe-4e14-be52-d3ced671911b-logs\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.916397 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwvd5\" (UniqueName: \"kubernetes.io/projected/54bb51e4-6152-41e2-9489-b06e33c16177-kube-api-access-gwvd5\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.917189 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.918819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.919542 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f445d9-b3be-421d-b30a-89759c1ad2e8-kube-api-access-hxwc4" (OuterVolumeSpecName: "kube-api-access-hxwc4") pod "50f445d9-b3be-421d-b30a-89759c1ad2e8" (UID: "50f445d9-b3be-421d-b30a-89759c1ad2e8"). InnerVolumeSpecName "kube-api-access-hxwc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.921111 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-config-data\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:17 crc kubenswrapper[4760]: I1125 08:30:17.934515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4n7n\" (UniqueName: \"kubernetes.io/projected/a2422512-0bfe-4e14-be52-d3ced671911b-kube-api-access-r4n7n\") pod \"nova-metadata-0\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") " pod="openstack/nova-metadata-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.018518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-dns-svc\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc 
kubenswrapper[4760]: I1125 08:30:18.018874 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-sb\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.018924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-config\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.018946 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-nb\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.019026 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwvd5\" (UniqueName: \"kubernetes.io/projected/54bb51e4-6152-41e2-9489-b06e33c16177-kube-api-access-gwvd5\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.019166 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxwc4\" (UniqueName: \"kubernetes.io/projected/50f445d9-b3be-421d-b30a-89759c1ad2e8-kube-api-access-hxwc4\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.021624 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-dns-svc\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.021924 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-config\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.022161 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-nb\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.022735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-sb\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.048530 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwvd5\" (UniqueName: \"kubernetes.io/projected/54bb51e4-6152-41e2-9489-b06e33c16177-kube-api-access-gwvd5\") pod \"dnsmasq-dns-69494d9f89-bwbsn\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.074033 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.184421 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.391408 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"50f445d9-b3be-421d-b30a-89759c1ad2e8","Type":"ContainerDied","Data":"3fb6dd4a0a3e7b3c8ca0bb7742f1448ac2df466ab85aff728dd32d652d5c8655"} Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.391709 4760 scope.go:117] "RemoveContainer" containerID="00c7ebe103517c8eb5440b169de110b1f929b8edd695ff408bf0803c5d8e40f1" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.391494 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.435655 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.441298 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.449365 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:18 crc kubenswrapper[4760]: W1125 08:30:18.452470 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod863b3b4b_8314_4de5_9f3d_29b7028dbd6e.slice/crio-75083a3bf7a434aaf9df366fea5abbd5528933a557004b6e3d9b235263259f94 WatchSource:0}: Error finding container 75083a3bf7a434aaf9df366fea5abbd5528933a557004b6e3d9b235263259f94: Status 404 returned error can't find the container with id 75083a3bf7a434aaf9df366fea5abbd5528933a557004b6e3d9b235263259f94 Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.456497 4760 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.457668 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.460124 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.460220 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.480677 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.528560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bd20932f-cb28-4343-98df-425123f7c87f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.528644 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x94j8\" (UniqueName: \"kubernetes.io/projected/bd20932f-cb28-4343-98df-425123f7c87f-kube-api-access-x94j8\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.528692 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd20932f-cb28-4343-98df-425123f7c87f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc 
kubenswrapper[4760]: I1125 08:30:18.528715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20932f-cb28-4343-98df-425123f7c87f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.545923 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dq2fl"] Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.547119 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.550011 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.554549 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.560595 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dq2fl"] Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.626022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zv989"] Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.630027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bd20932f-cb28-4343-98df-425123f7c87f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.630092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x94j8\" 
(UniqueName: \"kubernetes.io/projected/bd20932f-cb28-4343-98df-425123f7c87f-kube-api-access-x94j8\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.630143 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd20932f-cb28-4343-98df-425123f7c87f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.630174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20932f-cb28-4343-98df-425123f7c87f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.630214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-scripts\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.630279 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl" Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.630309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mmjj2\" (UniqueName: \"kubernetes.io/projected/c44f13d4-c189-4609-944a-3dbaaee53e6b-kube-api-access-mmjj2\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.630400 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-config-data\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.643664 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bd20932f-cb28-4343-98df-425123f7c87f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.660351 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd20932f-cb28-4343-98df-425123f7c87f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.662004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20932f-cb28-4343-98df-425123f7c87f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.695743 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x94j8\" (UniqueName: \"kubernetes.io/projected/bd20932f-cb28-4343-98df-425123f7c87f-kube-api-access-x94j8\") pod \"kube-state-metrics-0\" (UID: \"bd20932f-cb28-4343-98df-425123f7c87f\") " pod="openstack/kube-state-metrics-0"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.742177 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-config-data\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.742307 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-scripts\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.742337 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.742359 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmjj2\" (UniqueName: \"kubernetes.io/projected/c44f13d4-c189-4609-944a-3dbaaee53e6b-kube-api-access-mmjj2\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.750906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-config-data\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.751786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.758367 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-scripts\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.762022 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmjj2\" (UniqueName: \"kubernetes.io/projected/c44f13d4-c189-4609-944a-3dbaaee53e6b-kube-api-access-mmjj2\") pod \"nova-cell1-conductor-db-sync-dq2fl\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.765998 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 08:30:18 crc kubenswrapper[4760]: W1125 08:30:18.790292 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc606d7ed_7669_4df1_bc31_851c14fdbc73.slice/crio-978879b30522efa8d0bd0e51ed1a565bf3d5644465d1a8ce440a773bb1c6c3c7 WatchSource:0}: Error finding container 978879b30522efa8d0bd0e51ed1a565bf3d5644465d1a8ce440a773bb1c6c3c7: Status 404 returned error can't find the container with id 978879b30522efa8d0bd0e51ed1a565bf3d5644465d1a8ce440a773bb1c6c3c7
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.795740 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.810013 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.824315 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.845552 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.845871 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="ceilometer-central-agent" containerID="cri-o://e9f27c864347d81879449c4a65f13a5c7477167b6dfeb9748e9a5aa8347e6cb4" gracePeriod=30
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.845990 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="proxy-httpd" containerID="cri-o://d6119cb9cc6cd9c37368527c27bf0cdbc1b1536636c19d3475a7ab18b2c1c1e0" gracePeriod=30
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.846033 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="sg-core" containerID="cri-o://fdea27ee4298a4fc35f6b6c4ff6e9838adaddfa93f4df5c6643bf911d5ad65b8" gracePeriod=30
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.846067 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="ceilometer-notification-agent" containerID="cri-o://83b55ac537070eaead07855c0b03320cc501615093b44933acbe219cde306dc0" gracePeriod=30
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.867430 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dq2fl"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.991965 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f445d9-b3be-421d-b30a-89759c1ad2e8" path="/var/lib/kubelet/pods/50f445d9-b3be-421d-b30a-89759c1ad2e8/volumes"
Nov 25 08:30:18 crc kubenswrapper[4760]: I1125 08:30:18.993362 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69494d9f89-bwbsn"]
Nov 25 08:30:18 crc kubenswrapper[4760]: W1125 08:30:18.993446 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54bb51e4_6152_41e2_9489_b06e33c16177.slice/crio-9070d983632fc8fa7c3e75af2c30009baed676026eb5dc4c5852824bc91bb936 WatchSource:0}: Error finding container 9070d983632fc8fa7c3e75af2c30009baed676026eb5dc4c5852824bc91bb936: Status 404 returned error can't find the container with id 9070d983632fc8fa7c3e75af2c30009baed676026eb5dc4c5852824bc91bb936
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.418671 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.565307 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dq2fl"]
Nov 25 08:30:19 crc kubenswrapper[4760]: W1125 08:30:19.575144 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd20932f_cb28_4343_98df_425123f7c87f.slice/crio-3ce1a0325614f813438f7da211a1849cf4f2a6655bb45caafdd4bd9a50160dec WatchSource:0}: Error finding container 3ce1a0325614f813438f7da211a1849cf4f2a6655bb45caafdd4bd9a50160dec: Status 404 returned error can't find the container with id 3ce1a0325614f813438f7da211a1849cf4f2a6655bb45caafdd4bd9a50160dec
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.584190 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c606d7ed-7669-4df1-bc31-851c14fdbc73","Type":"ContainerStarted","Data":"978879b30522efa8d0bd0e51ed1a565bf3d5644465d1a8ce440a773bb1c6c3c7"}
Nov 25 08:30:19 crc kubenswrapper[4760]: W1125 08:30:19.587398 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc44f13d4_c189_4609_944a_3dbaaee53e6b.slice/crio-a5dba49b7749d51beeb743a98450d1b43c66bf6f9cba701730b2093bcd3f233c WatchSource:0}: Error finding container a5dba49b7749d51beeb743a98450d1b43c66bf6f9cba701730b2093bcd3f233c: Status 404 returned error can't find the container with id a5dba49b7749d51beeb743a98450d1b43c66bf6f9cba701730b2093bcd3f233c
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.593834 4760 generic.go:334] "Generic (PLEG): container finished" podID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerID="d6119cb9cc6cd9c37368527c27bf0cdbc1b1536636c19d3475a7ab18b2c1c1e0" exitCode=0
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.593864 4760 generic.go:334] "Generic (PLEG): container finished" podID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerID="fdea27ee4298a4fc35f6b6c4ff6e9838adaddfa93f4df5c6643bf911d5ad65b8" exitCode=2
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.593873 4760 generic.go:334] "Generic (PLEG): container finished" podID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerID="e9f27c864347d81879449c4a65f13a5c7477167b6dfeb9748e9a5aa8347e6cb4" exitCode=0
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.593925 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerDied","Data":"d6119cb9cc6cd9c37368527c27bf0cdbc1b1536636c19d3475a7ab18b2c1c1e0"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.593956 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerDied","Data":"fdea27ee4298a4fc35f6b6c4ff6e9838adaddfa93f4df5c6643bf911d5ad65b8"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.593971 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerDied","Data":"e9f27c864347d81879449c4a65f13a5c7477167b6dfeb9748e9a5aa8347e6cb4"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.597512 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zv989" event={"ID":"152d5f92-3188-4d96-8594-455aacbb0e4a","Type":"ContainerStarted","Data":"0fc31a4aea11467b98541fdd66687da138fb33c403417abf6c44e3d343da5fce"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.597558 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zv989" event={"ID":"152d5f92-3188-4d96-8594-455aacbb0e4a","Type":"ContainerStarted","Data":"1026068d3640db6f9cc8f004e4d794ef1163057892588beb9c2dde9b55c6a158"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.601025 4760 generic.go:334] "Generic (PLEG): container finished" podID="54bb51e4-6152-41e2-9489-b06e33c16177" containerID="8713dae99663bc6d5635b5873d189fc8ab82b435b748850967d31591a558cb0a" exitCode=0
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.601087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" event={"ID":"54bb51e4-6152-41e2-9489-b06e33c16177","Type":"ContainerDied","Data":"8713dae99663bc6d5635b5873d189fc8ab82b435b748850967d31591a558cb0a"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.601118 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" event={"ID":"54bb51e4-6152-41e2-9489-b06e33c16177","Type":"ContainerStarted","Data":"9070d983632fc8fa7c3e75af2c30009baed676026eb5dc4c5852824bc91bb936"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.607594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd9e4283-c692-42e6-9205-d00799923720","Type":"ContainerStarted","Data":"a396103881395577049753106d03116faca4ae1c5284a32e5846fbd5f805f186"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.610889 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2422512-0bfe-4e14-be52-d3ced671911b","Type":"ContainerStarted","Data":"29110523d44cf4de8e8a766e83815f848d12a8e951fd84f74da282dce4b6402c"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.622122 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863b3b4b-8314-4de5-9f3d-29b7028dbd6e","Type":"ContainerStarted","Data":"75083a3bf7a434aaf9df366fea5abbd5528933a557004b6e3d9b235263259f94"}
Nov 25 08:30:19 crc kubenswrapper[4760]: I1125 08:30:19.623692 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zv989" podStartSLOduration=2.623671511 podStartE2EDuration="2.623671511s" podCreationTimestamp="2025-11-25 08:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:19.616601687 +0000 UTC m=+1153.325632522" watchObservedRunningTime="2025-11-25 08:30:19.623671511 +0000 UTC m=+1153.332702306"
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.667492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" event={"ID":"54bb51e4-6152-41e2-9489-b06e33c16177","Type":"ContainerStarted","Data":"91bbd968de891dd2c3721d8043ea159565fda7da0c970a5aec82886f9b908206"}
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.667879 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn"
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.669875 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" event={"ID":"c44f13d4-c189-4609-944a-3dbaaee53e6b","Type":"ContainerStarted","Data":"b28632d12bc38d13a25dc1f56ef8f3c8e1dc901574857179c4ed50b4a6e4276b"}
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.669922 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" event={"ID":"c44f13d4-c189-4609-944a-3dbaaee53e6b","Type":"ContainerStarted","Data":"a5dba49b7749d51beeb743a98450d1b43c66bf6f9cba701730b2093bcd3f233c"}
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.672820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd20932f-cb28-4343-98df-425123f7c87f","Type":"ContainerStarted","Data":"72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b"}
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.672876 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.672897 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd20932f-cb28-4343-98df-425123f7c87f","Type":"ContainerStarted","Data":"3ce1a0325614f813438f7da211a1849cf4f2a6655bb45caafdd4bd9a50160dec"}
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.688791 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" podStartSLOduration=3.688768865 podStartE2EDuration="3.688768865s" podCreationTimestamp="2025-11-25 08:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:20.68581603 +0000 UTC m=+1154.394846835" watchObservedRunningTime="2025-11-25 08:30:20.688768865 +0000 UTC m=+1154.397799660"
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.710997 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.280446853 podStartE2EDuration="2.710955715s" podCreationTimestamp="2025-11-25 08:30:18 +0000 UTC" firstStartedPulling="2025-11-25 08:30:19.583536715 +0000 UTC m=+1153.292567510" lastFinishedPulling="2025-11-25 08:30:20.014045577 +0000 UTC m=+1153.723076372" observedRunningTime="2025-11-25 08:30:20.701957805 +0000 UTC m=+1154.410988600" watchObservedRunningTime="2025-11-25 08:30:20.710955715 +0000 UTC m=+1154.419986510"
Nov 25 08:30:20 crc kubenswrapper[4760]: I1125 08:30:20.727885 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" podStartSLOduration=2.727863932 podStartE2EDuration="2.727863932s" podCreationTimestamp="2025-11-25 08:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:20.719195912 +0000 UTC m=+1154.428226717" watchObservedRunningTime="2025-11-25 08:30:20.727863932 +0000 UTC m=+1154.436894727"
Nov 25 08:30:21 crc kubenswrapper[4760]: I1125 08:30:21.688788 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 08:30:21 crc kubenswrapper[4760]: I1125 08:30:21.715144 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.743020 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863b3b4b-8314-4de5-9f3d-29b7028dbd6e","Type":"ContainerStarted","Data":"2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b"}
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.744709 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863b3b4b-8314-4de5-9f3d-29b7028dbd6e","Type":"ContainerStarted","Data":"3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265"}
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.749764 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c606d7ed-7669-4df1-bc31-851c14fdbc73","Type":"ContainerStarted","Data":"87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24"}
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.749963 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c606d7ed-7669-4df1-bc31-851c14fdbc73" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24" gracePeriod=30
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.755547 4760 generic.go:334] "Generic (PLEG): container finished" podID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerID="83b55ac537070eaead07855c0b03320cc501615093b44933acbe219cde306dc0" exitCode=0
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.755650 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerDied","Data":"83b55ac537070eaead07855c0b03320cc501615093b44933acbe219cde306dc0"}
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.758535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd9e4283-c692-42e6-9205-d00799923720","Type":"ContainerStarted","Data":"9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0"}
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.762167 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2422512-0bfe-4e14-be52-d3ced671911b","Type":"ContainerStarted","Data":"9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a"}
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.762230 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2422512-0bfe-4e14-be52-d3ced671911b","Type":"ContainerStarted","Data":"4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73"}
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.762433 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a2422512-0bfe-4e14-be52-d3ced671911b" containerName="nova-metadata-log" containerID="cri-o://4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73" gracePeriod=30
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.762612 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a2422512-0bfe-4e14-be52-d3ced671911b" containerName="nova-metadata-metadata" containerID="cri-o://9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a" gracePeriod=30
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.791275 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.726303937 podStartE2EDuration="6.791229813s" podCreationTimestamp="2025-11-25 08:30:17 +0000 UTC" firstStartedPulling="2025-11-25 08:30:18.458137744 +0000 UTC m=+1152.167168539" lastFinishedPulling="2025-11-25 08:30:22.52306362 +0000 UTC m=+1156.232094415" observedRunningTime="2025-11-25 08:30:23.779354781 +0000 UTC m=+1157.488385596" watchObservedRunningTime="2025-11-25 08:30:23.791229813 +0000 UTC m=+1157.500260598"
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.814586 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.090124879 podStartE2EDuration="6.814561205s" podCreationTimestamp="2025-11-25 08:30:17 +0000 UTC" firstStartedPulling="2025-11-25 08:30:18.795428851 +0000 UTC m=+1152.504459646" lastFinishedPulling="2025-11-25 08:30:22.519865177 +0000 UTC m=+1156.228895972" observedRunningTime="2025-11-25 08:30:23.804935218 +0000 UTC m=+1157.513966033" watchObservedRunningTime="2025-11-25 08:30:23.814561205 +0000 UTC m=+1157.523592000"
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.829574 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.148178231 podStartE2EDuration="6.829550277s" podCreationTimestamp="2025-11-25 08:30:17 +0000 UTC" firstStartedPulling="2025-11-25 08:30:18.823292454 +0000 UTC m=+1152.532323249" lastFinishedPulling="2025-11-25 08:30:22.50466447 +0000 UTC m=+1156.213695295" observedRunningTime="2025-11-25 08:30:23.821070423 +0000 UTC m=+1157.530101218" watchObservedRunningTime="2025-11-25 08:30:23.829550277 +0000 UTC m=+1157.538581072"
Nov 25 08:30:23 crc kubenswrapper[4760]: I1125 08:30:23.838223 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.886862173 podStartE2EDuration="6.838194046s" podCreationTimestamp="2025-11-25 08:30:17 +0000 UTC" firstStartedPulling="2025-11-25 08:30:18.611485972 +0000 UTC m=+1152.320516767" lastFinishedPulling="2025-11-25 08:30:22.562817855 +0000 UTC m=+1156.271848640" observedRunningTime="2025-11-25 08:30:23.837458665 +0000 UTC m=+1157.546489460" watchObservedRunningTime="2025-11-25 08:30:23.838194046 +0000 UTC m=+1157.547224841"
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.415945 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.417598 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461267 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-log-httpd\") pod \"0ad91a00-7be1-4543-9def-eac01e503bc7\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461339 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-config-data\") pod \"0ad91a00-7be1-4543-9def-eac01e503bc7\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461386 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-scripts\") pod \"0ad91a00-7be1-4543-9def-eac01e503bc7\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461433 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2422512-0bfe-4e14-be52-d3ced671911b-logs\") pod \"a2422512-0bfe-4e14-be52-d3ced671911b\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461460 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2s8j\" (UniqueName: \"kubernetes.io/projected/0ad91a00-7be1-4543-9def-eac01e503bc7-kube-api-access-b2s8j\") pod \"0ad91a00-7be1-4543-9def-eac01e503bc7\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461531 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-sg-core-conf-yaml\") pod \"0ad91a00-7be1-4543-9def-eac01e503bc7\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461661 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-config-data\") pod \"a2422512-0bfe-4e14-be52-d3ced671911b\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461678 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-combined-ca-bundle\") pod \"a2422512-0bfe-4e14-be52-d3ced671911b\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461706 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-run-httpd\") pod \"0ad91a00-7be1-4543-9def-eac01e503bc7\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461733 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4n7n\" (UniqueName: \"kubernetes.io/projected/a2422512-0bfe-4e14-be52-d3ced671911b-kube-api-access-r4n7n\") pod \"a2422512-0bfe-4e14-be52-d3ced671911b\" (UID: \"a2422512-0bfe-4e14-be52-d3ced671911b\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.461759 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-combined-ca-bundle\") pod \"0ad91a00-7be1-4543-9def-eac01e503bc7\" (UID: \"0ad91a00-7be1-4543-9def-eac01e503bc7\") "
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.469467 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2422512-0bfe-4e14-be52-d3ced671911b-logs" (OuterVolumeSpecName: "logs") pod "a2422512-0bfe-4e14-be52-d3ced671911b" (UID: "a2422512-0bfe-4e14-be52-d3ced671911b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.469722 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ad91a00-7be1-4543-9def-eac01e503bc7" (UID: "0ad91a00-7be1-4543-9def-eac01e503bc7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.470580 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ad91a00-7be1-4543-9def-eac01e503bc7" (UID: "0ad91a00-7be1-4543-9def-eac01e503bc7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.476503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2422512-0bfe-4e14-be52-d3ced671911b-kube-api-access-r4n7n" (OuterVolumeSpecName: "kube-api-access-r4n7n") pod "a2422512-0bfe-4e14-be52-d3ced671911b" (UID: "a2422512-0bfe-4e14-be52-d3ced671911b"). InnerVolumeSpecName "kube-api-access-r4n7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.477157 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad91a00-7be1-4543-9def-eac01e503bc7-kube-api-access-b2s8j" (OuterVolumeSpecName: "kube-api-access-b2s8j") pod "0ad91a00-7be1-4543-9def-eac01e503bc7" (UID: "0ad91a00-7be1-4543-9def-eac01e503bc7"). InnerVolumeSpecName "kube-api-access-b2s8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.484598 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-scripts" (OuterVolumeSpecName: "scripts") pod "0ad91a00-7be1-4543-9def-eac01e503bc7" (UID: "0ad91a00-7be1-4543-9def-eac01e503bc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.507423 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ad91a00-7be1-4543-9def-eac01e503bc7" (UID: "0ad91a00-7be1-4543-9def-eac01e503bc7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.543019 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-config-data" (OuterVolumeSpecName: "config-data") pod "a2422512-0bfe-4e14-be52-d3ced671911b" (UID: "a2422512-0bfe-4e14-be52-d3ced671911b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.565660 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4n7n\" (UniqueName: \"kubernetes.io/projected/a2422512-0bfe-4e14-be52-d3ced671911b-kube-api-access-r4n7n\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.565873 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.566115 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.566491 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2422512-0bfe-4e14-be52-d3ced671911b-logs\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.566606 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2s8j\" (UniqueName: \"kubernetes.io/projected/0ad91a00-7be1-4543-9def-eac01e503bc7-kube-api-access-b2s8j\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.566633 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.566663 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.566677 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a00-7be1-4543-9def-eac01e503bc7-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.571737 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2422512-0bfe-4e14-be52-d3ced671911b" (UID: "a2422512-0bfe-4e14-be52-d3ced671911b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.601989 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ad91a00-7be1-4543-9def-eac01e503bc7" (UID: "0ad91a00-7be1-4543-9def-eac01e503bc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.628615 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-config-data" (OuterVolumeSpecName: "config-data") pod "0ad91a00-7be1-4543-9def-eac01e503bc7" (UID: "0ad91a00-7be1-4543-9def-eac01e503bc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.668887 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.668924 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad91a00-7be1-4543-9def-eac01e503bc7-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.668937 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2422512-0bfe-4e14-be52-d3ced671911b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.772551 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ad91a00-7be1-4543-9def-eac01e503bc7","Type":"ContainerDied","Data":"6b26f0ca4c78bd2a5810e89fbd0580c58a44bc43b05227c382244fa455136153"}
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.772627 4760 scope.go:117] "RemoveContainer" containerID="d6119cb9cc6cd9c37368527c27bf0cdbc1b1536636c19d3475a7ab18b2c1c1e0"
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.772784 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.776627 4760 generic.go:334] "Generic (PLEG): container finished" podID="a2422512-0bfe-4e14-be52-d3ced671911b" containerID="9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a" exitCode=0
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.776663 4760 generic.go:334] "Generic (PLEG): container finished" podID="a2422512-0bfe-4e14-be52-d3ced671911b" containerID="4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73" exitCode=143
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.776724 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2422512-0bfe-4e14-be52-d3ced671911b","Type":"ContainerDied","Data":"9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a"}
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.776755 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2422512-0bfe-4e14-be52-d3ced671911b","Type":"ContainerDied","Data":"4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73"}
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.776769 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2422512-0bfe-4e14-be52-d3ced671911b","Type":"ContainerDied","Data":"29110523d44cf4de8e8a766e83815f848d12a8e951fd84f74da282dce4b6402c"}
Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.777155 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.798707 4760 scope.go:117] "RemoveContainer" containerID="fdea27ee4298a4fc35f6b6c4ff6e9838adaddfa93f4df5c6643bf911d5ad65b8" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.815606 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.825872 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.834431 4760 scope.go:117] "RemoveContainer" containerID="83b55ac537070eaead07855c0b03320cc501615093b44933acbe219cde306dc0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.844454 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.869120 4760 scope.go:117] "RemoveContainer" containerID="e9f27c864347d81879449c4a65f13a5c7477167b6dfeb9748e9a5aa8347e6cb4" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.871764 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.892502 4760 scope.go:117] "RemoveContainer" containerID="9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.895821 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:30:24 crc kubenswrapper[4760]: E1125 08:30:24.896186 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="sg-core" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896200 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="sg-core" Nov 25 08:30:24 crc kubenswrapper[4760]: E1125 08:30:24.896213 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a2422512-0bfe-4e14-be52-d3ced671911b" containerName="nova-metadata-metadata" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896219 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2422512-0bfe-4e14-be52-d3ced671911b" containerName="nova-metadata-metadata" Nov 25 08:30:24 crc kubenswrapper[4760]: E1125 08:30:24.896232 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="proxy-httpd" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896240 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="proxy-httpd" Nov 25 08:30:24 crc kubenswrapper[4760]: E1125 08:30:24.896275 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="ceilometer-notification-agent" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896282 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="ceilometer-notification-agent" Nov 25 08:30:24 crc kubenswrapper[4760]: E1125 08:30:24.896300 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2422512-0bfe-4e14-be52-d3ced671911b" containerName="nova-metadata-log" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896306 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2422512-0bfe-4e14-be52-d3ced671911b" containerName="nova-metadata-log" Nov 25 08:30:24 crc kubenswrapper[4760]: E1125 08:30:24.896320 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="ceilometer-central-agent" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896326 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="ceilometer-central-agent" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 
08:30:24.896516 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2422512-0bfe-4e14-be52-d3ced671911b" containerName="nova-metadata-metadata" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896528 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="ceilometer-notification-agent" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896548 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2422512-0bfe-4e14-be52-d3ced671911b" containerName="nova-metadata-log" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896572 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="sg-core" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896587 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="ceilometer-central-agent" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.896601 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" containerName="proxy-httpd" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.898464 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.899858 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.900086 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.900315 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.904199 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.909275 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.910884 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.910966 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.914697 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.921967 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.930500 4760 scope.go:117] "RemoveContainer" containerID="4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.949546 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad91a00-7be1-4543-9def-eac01e503bc7" path="/var/lib/kubelet/pods/0ad91a00-7be1-4543-9def-eac01e503bc7/volumes" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 
08:30:24.950350 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2422512-0bfe-4e14-be52-d3ced671911b" path="/var/lib/kubelet/pods/a2422512-0bfe-4e14-be52-d3ced671911b/volumes" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.952838 4760 scope.go:117] "RemoveContainer" containerID="9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a" Nov 25 08:30:24 crc kubenswrapper[4760]: E1125 08:30:24.953152 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a\": container with ID starting with 9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a not found: ID does not exist" containerID="9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.953173 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a"} err="failed to get container status \"9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a\": rpc error: code = NotFound desc = could not find container \"9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a\": container with ID starting with 9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a not found: ID does not exist" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.953190 4760 scope.go:117] "RemoveContainer" containerID="4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73" Nov 25 08:30:24 crc kubenswrapper[4760]: E1125 08:30:24.953438 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73\": container with ID starting with 4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73 not found: ID does not 
exist" containerID="4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.953455 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73"} err="failed to get container status \"4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73\": rpc error: code = NotFound desc = could not find container \"4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73\": container with ID starting with 4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73 not found: ID does not exist" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.953469 4760 scope.go:117] "RemoveContainer" containerID="9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.953664 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a"} err="failed to get container status \"9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a\": rpc error: code = NotFound desc = could not find container \"9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a\": container with ID starting with 9c257dd7b10fe0d638f021e1ba9b31b8b9b4b245f6be0defcd8c023d2866c36a not found: ID does not exist" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.953688 4760 scope.go:117] "RemoveContainer" containerID="4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.953944 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73"} err="failed to get container status \"4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73\": rpc error: code = NotFound desc 
= could not find container \"4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73\": container with ID starting with 4246c3c43ac083c89e2a3efb0bab1057b4853095f2de40b48bcd81c99a8d4e73 not found: ID does not exist" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-scripts\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974329 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974444 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzqtc\" (UniqueName: \"kubernetes.io/projected/99f48473-9869-4203-8ca5-6288b177ba0c-kube-api-access-pzqtc\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974590 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqq8w\" (UniqueName: 
\"kubernetes.io/projected/2b8a311c-357e-41f5-9973-6d4f966f96af-kube-api-access-cqq8w\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974641 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-config-data\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-log-httpd\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974796 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-config-data\") pod \"nova-metadata-0\" (UID: 
\"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974824 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f48473-9869-4203-8ca5-6288b177ba0c-logs\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:24 crc kubenswrapper[4760]: I1125 08:30:24.974932 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-run-httpd\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.076651 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f48473-9869-4203-8ca5-6288b177ba0c-logs\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.076727 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-run-httpd\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.076763 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-scripts\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.077437 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.077846 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.077369 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-run-httpd\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.077207 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f48473-9869-4203-8ca5-6288b177ba0c-logs\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.077895 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzqtc\" (UniqueName: \"kubernetes.io/projected/99f48473-9869-4203-8ca5-6288b177ba0c-kube-api-access-pzqtc\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" 
Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.078023 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqq8w\" (UniqueName: \"kubernetes.io/projected/2b8a311c-357e-41f5-9973-6d4f966f96af-kube-api-access-cqq8w\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.078065 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.078107 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-config-data\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.078128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-log-httpd\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.078156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.078195 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-config-data\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.078219 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.078735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-log-httpd\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.082262 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.082693 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.082797 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-scripts\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 
08:30:25.083149 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.084619 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-config-data\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.091531 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.091735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.093143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-config-data\") pod \"ceilometer-0\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.094723 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqq8w\" (UniqueName: \"kubernetes.io/projected/2b8a311c-357e-41f5-9973-6d4f966f96af-kube-api-access-cqq8w\") pod \"ceilometer-0\" (UID: 
\"2b8a311c-357e-41f5-9973-6d4f966f96af\") " pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.096389 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzqtc\" (UniqueName: \"kubernetes.io/projected/99f48473-9869-4203-8ca5-6288b177ba0c-kube-api-access-pzqtc\") pod \"nova-metadata-0\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.233528 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.248748 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.714318 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:30:25 crc kubenswrapper[4760]: W1125 08:30:25.714938 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b8a311c_357e_41f5_9973_6d4f966f96af.slice/crio-fd7368c0ee6f5e9732fd1c0398575d9e26bb582db4967de7a10066fda472edb1 WatchSource:0}: Error finding container fd7368c0ee6f5e9732fd1c0398575d9e26bb582db4967de7a10066fda472edb1: Status 404 returned error can't find the container with id fd7368c0ee6f5e9732fd1c0398575d9e26bb582db4967de7a10066fda472edb1 Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.791532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerStarted","Data":"fd7368c0ee6f5e9732fd1c0398575d9e26bb582db4967de7a10066fda472edb1"} Nov 25 08:30:25 crc kubenswrapper[4760]: I1125 08:30:25.847058 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:26 crc kubenswrapper[4760]: I1125 08:30:26.838048 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerStarted","Data":"e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d"} Nov 25 08:30:26 crc kubenswrapper[4760]: I1125 08:30:26.840638 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99f48473-9869-4203-8ca5-6288b177ba0c","Type":"ContainerStarted","Data":"7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e"} Nov 25 08:30:26 crc kubenswrapper[4760]: I1125 08:30:26.840741 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99f48473-9869-4203-8ca5-6288b177ba0c","Type":"ContainerStarted","Data":"ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e"} Nov 25 08:30:26 crc kubenswrapper[4760]: I1125 08:30:26.840822 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99f48473-9869-4203-8ca5-6288b177ba0c","Type":"ContainerStarted","Data":"663375dbdda8012bd0dd7ece70de1929ddb92474dc0004e66489e69cb76515f6"} Nov 25 08:30:26 crc kubenswrapper[4760]: I1125 08:30:26.990052 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.990026896 podStartE2EDuration="2.990026896s" podCreationTimestamp="2025-11-25 08:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:26.856622893 +0000 UTC m=+1160.565653708" watchObservedRunningTime="2025-11-25 08:30:26.990026896 +0000 UTC m=+1160.699057681" Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.774521 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.774814 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.812716 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.812793 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.845876 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.852150 4760 generic.go:334] "Generic (PLEG): container finished" podID="152d5f92-3188-4d96-8594-455aacbb0e4a" containerID="0fc31a4aea11467b98541fdd66687da138fb33c403417abf6c44e3d343da5fce" exitCode=0 Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.852219 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zv989" event={"ID":"152d5f92-3188-4d96-8594-455aacbb0e4a","Type":"ContainerDied","Data":"0fc31a4aea11467b98541fdd66687da138fb33c403417abf6c44e3d343da5fce"} Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.854521 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerStarted","Data":"5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b"} Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.854559 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerStarted","Data":"9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0"} Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.895922 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 08:30:27 crc kubenswrapper[4760]: I1125 08:30:27.918595 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.186583 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.249194 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-775457b975-8dft4"] Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.249539 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-775457b975-8dft4" podUID="62f5081a-73e1-49f7-ac0a-d42c5271b6ba" containerName="dnsmasq-dns" containerID="cri-o://98cd0eb2943555555085d3ee8dd81577d5bbfc87745d36e244837ce8b55fbb67" gracePeriod=10 Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.858778 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.859120 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.882343 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.883892 4760 generic.go:334] "Generic (PLEG): container finished" podID="62f5081a-73e1-49f7-ac0a-d42c5271b6ba" containerID="98cd0eb2943555555085d3ee8dd81577d5bbfc87745d36e244837ce8b55fbb67" exitCode=0 Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.883968 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775457b975-8dft4" event={"ID":"62f5081a-73e1-49f7-ac0a-d42c5271b6ba","Type":"ContainerDied","Data":"98cd0eb2943555555085d3ee8dd81577d5bbfc87745d36e244837ce8b55fbb67"} Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.883995 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775457b975-8dft4" event={"ID":"62f5081a-73e1-49f7-ac0a-d42c5271b6ba","Type":"ContainerDied","Data":"d6b502d9f0f9b9629e55c631ce9b1cc5fc3f7e89a4bbdadd5d77cd9cff0f6989"} Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.884006 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6b502d9f0f9b9629e55c631ce9b1cc5fc3f7e89a4bbdadd5d77cd9cff0f6989" Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.891445 4760 generic.go:334] "Generic (PLEG): container finished" podID="c44f13d4-c189-4609-944a-3dbaaee53e6b" containerID="b28632d12bc38d13a25dc1f56ef8f3c8e1dc901574857179c4ed50b4a6e4276b" exitCode=0 Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.892182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" event={"ID":"c44f13d4-c189-4609-944a-3dbaaee53e6b","Type":"ContainerDied","Data":"b28632d12bc38d13a25dc1f56ef8f3c8e1dc901574857179c4ed50b4a6e4276b"} Nov 25 08:30:28 crc kubenswrapper[4760]: I1125 08:30:28.947512 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.048932 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-dns-svc\") pod \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.049154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-config\") pod \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.049423 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-sb\") pod \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.049583 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l729g\" (UniqueName: \"kubernetes.io/projected/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-kube-api-access-l729g\") pod \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.049786 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-nb\") pod \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\" (UID: \"62f5081a-73e1-49f7-ac0a-d42c5271b6ba\") " Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.079459 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-kube-api-access-l729g" (OuterVolumeSpecName: "kube-api-access-l729g") pod "62f5081a-73e1-49f7-ac0a-d42c5271b6ba" (UID: "62f5081a-73e1-49f7-ac0a-d42c5271b6ba"). InnerVolumeSpecName "kube-api-access-l729g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.112503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "62f5081a-73e1-49f7-ac0a-d42c5271b6ba" (UID: "62f5081a-73e1-49f7-ac0a-d42c5271b6ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.113717 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "62f5081a-73e1-49f7-ac0a-d42c5271b6ba" (UID: "62f5081a-73e1-49f7-ac0a-d42c5271b6ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.124822 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-config" (OuterVolumeSpecName: "config") pod "62f5081a-73e1-49f7-ac0a-d42c5271b6ba" (UID: "62f5081a-73e1-49f7-ac0a-d42c5271b6ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.152803 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62f5081a-73e1-49f7-ac0a-d42c5271b6ba" (UID: "62f5081a-73e1-49f7-ac0a-d42c5271b6ba"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.168815 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.168896 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l729g\" (UniqueName: \"kubernetes.io/projected/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-kube-api-access-l729g\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.168915 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.168929 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.168939 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f5081a-73e1-49f7-ac0a-d42c5271b6ba-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.231644 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.271127 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-combined-ca-bundle\") pod \"152d5f92-3188-4d96-8594-455aacbb0e4a\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.271233 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-config-data\") pod \"152d5f92-3188-4d96-8594-455aacbb0e4a\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.271284 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-scripts\") pod \"152d5f92-3188-4d96-8594-455aacbb0e4a\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.271417 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h49mf\" (UniqueName: \"kubernetes.io/projected/152d5f92-3188-4d96-8594-455aacbb0e4a-kube-api-access-h49mf\") pod \"152d5f92-3188-4d96-8594-455aacbb0e4a\" (UID: \"152d5f92-3188-4d96-8594-455aacbb0e4a\") " Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.274549 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152d5f92-3188-4d96-8594-455aacbb0e4a-kube-api-access-h49mf" (OuterVolumeSpecName: "kube-api-access-h49mf") pod "152d5f92-3188-4d96-8594-455aacbb0e4a" (UID: "152d5f92-3188-4d96-8594-455aacbb0e4a"). InnerVolumeSpecName "kube-api-access-h49mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.277821 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-scripts" (OuterVolumeSpecName: "scripts") pod "152d5f92-3188-4d96-8594-455aacbb0e4a" (UID: "152d5f92-3188-4d96-8594-455aacbb0e4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.310385 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "152d5f92-3188-4d96-8594-455aacbb0e4a" (UID: "152d5f92-3188-4d96-8594-455aacbb0e4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.311001 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-config-data" (OuterVolumeSpecName: "config-data") pod "152d5f92-3188-4d96-8594-455aacbb0e4a" (UID: "152d5f92-3188-4d96-8594-455aacbb0e4a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.372932 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.372958 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.372967 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h49mf\" (UniqueName: \"kubernetes.io/projected/152d5f92-3188-4d96-8594-455aacbb0e4a-kube-api-access-h49mf\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.372976 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152d5f92-3188-4d96-8594-455aacbb0e4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.905860 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zv989" event={"ID":"152d5f92-3188-4d96-8594-455aacbb0e4a","Type":"ContainerDied","Data":"1026068d3640db6f9cc8f004e4d794ef1163057892588beb9c2dde9b55c6a158"} Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.906701 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1026068d3640db6f9cc8f004e4d794ef1163057892588beb9c2dde9b55c6a158" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.905909 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zv989" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.911378 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerStarted","Data":"9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b"} Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.911491 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-775457b975-8dft4" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.914509 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.964234 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.980582954 podStartE2EDuration="5.964213229s" podCreationTimestamp="2025-11-25 08:30:24 +0000 UTC" firstStartedPulling="2025-11-25 08:30:25.718525626 +0000 UTC m=+1159.427556421" lastFinishedPulling="2025-11-25 08:30:28.702155901 +0000 UTC m=+1162.411186696" observedRunningTime="2025-11-25 08:30:29.945602023 +0000 UTC m=+1163.654632818" watchObservedRunningTime="2025-11-25 08:30:29.964213229 +0000 UTC m=+1163.673244024" Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.989548 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-775457b975-8dft4"] Nov 25 08:30:29 crc kubenswrapper[4760]: I1125 08:30:29.996595 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-775457b975-8dft4"] Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.102776 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.103382 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-log" containerID="cri-o://3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265" gracePeriod=30 Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.103483 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-api" containerID="cri-o://2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b" gracePeriod=30 Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.126878 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.127102 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fd9e4283-c692-42e6-9205-d00799923720" containerName="nova-scheduler-scheduler" containerID="cri-o://9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0" gracePeriod=30 Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.198889 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.201947 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="99f48473-9869-4203-8ca5-6288b177ba0c" containerName="nova-metadata-log" containerID="cri-o://ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e" gracePeriod=30 Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.202431 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="99f48473-9869-4203-8ca5-6288b177ba0c" containerName="nova-metadata-metadata" containerID="cri-o://7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e" gracePeriod=30 Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.249805 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.249856 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.315763 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.396087 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-config-data\") pod \"c44f13d4-c189-4609-944a-3dbaaee53e6b\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.396270 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-combined-ca-bundle\") pod \"c44f13d4-c189-4609-944a-3dbaaee53e6b\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.396370 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmjj2\" (UniqueName: \"kubernetes.io/projected/c44f13d4-c189-4609-944a-3dbaaee53e6b-kube-api-access-mmjj2\") pod \"c44f13d4-c189-4609-944a-3dbaaee53e6b\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.396424 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-scripts\") pod \"c44f13d4-c189-4609-944a-3dbaaee53e6b\" (UID: \"c44f13d4-c189-4609-944a-3dbaaee53e6b\") " Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.402181 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-scripts" (OuterVolumeSpecName: "scripts") pod "c44f13d4-c189-4609-944a-3dbaaee53e6b" (UID: "c44f13d4-c189-4609-944a-3dbaaee53e6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.403963 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44f13d4-c189-4609-944a-3dbaaee53e6b-kube-api-access-mmjj2" (OuterVolumeSpecName: "kube-api-access-mmjj2") pod "c44f13d4-c189-4609-944a-3dbaaee53e6b" (UID: "c44f13d4-c189-4609-944a-3dbaaee53e6b"). InnerVolumeSpecName "kube-api-access-mmjj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.430594 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-config-data" (OuterVolumeSpecName: "config-data") pod "c44f13d4-c189-4609-944a-3dbaaee53e6b" (UID: "c44f13d4-c189-4609-944a-3dbaaee53e6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.443076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c44f13d4-c189-4609-944a-3dbaaee53e6b" (UID: "c44f13d4-c189-4609-944a-3dbaaee53e6b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.498502 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.498537 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmjj2\" (UniqueName: \"kubernetes.io/projected/c44f13d4-c189-4609-944a-3dbaaee53e6b-kube-api-access-mmjj2\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.498550 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.498560 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44f13d4-c189-4609-944a-3dbaaee53e6b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.715751 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.915205 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f48473-9869-4203-8ca5-6288b177ba0c-logs\") pod \"99f48473-9869-4203-8ca5-6288b177ba0c\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.915726 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f48473-9869-4203-8ca5-6288b177ba0c-logs" (OuterVolumeSpecName: "logs") pod "99f48473-9869-4203-8ca5-6288b177ba0c" (UID: "99f48473-9869-4203-8ca5-6288b177ba0c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.916163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzqtc\" (UniqueName: \"kubernetes.io/projected/99f48473-9869-4203-8ca5-6288b177ba0c-kube-api-access-pzqtc\") pod \"99f48473-9869-4203-8ca5-6288b177ba0c\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.916186 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-config-data\") pod \"99f48473-9869-4203-8ca5-6288b177ba0c\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.916239 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-nova-metadata-tls-certs\") pod \"99f48473-9869-4203-8ca5-6288b177ba0c\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.916289 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-combined-ca-bundle\") pod \"99f48473-9869-4203-8ca5-6288b177ba0c\" (UID: \"99f48473-9869-4203-8ca5-6288b177ba0c\") " Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.916701 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f48473-9869-4203-8ca5-6288b177ba0c-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.931538 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f48473-9869-4203-8ca5-6288b177ba0c-kube-api-access-pzqtc" (OuterVolumeSpecName: 
"kube-api-access-pzqtc") pod "99f48473-9869-4203-8ca5-6288b177ba0c" (UID: "99f48473-9869-4203-8ca5-6288b177ba0c"). InnerVolumeSpecName "kube-api-access-pzqtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.933331 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" event={"ID":"c44f13d4-c189-4609-944a-3dbaaee53e6b","Type":"ContainerDied","Data":"a5dba49b7749d51beeb743a98450d1b43c66bf6f9cba701730b2093bcd3f233c"} Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.933501 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5dba49b7749d51beeb743a98450d1b43c66bf6f9cba701730b2093bcd3f233c" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.933639 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.952555 4760 generic.go:334] "Generic (PLEG): container finished" podID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerID="3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265" exitCode=143 Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.967543 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99f48473-9869-4203-8ca5-6288b177ba0c" (UID: "99f48473-9869-4203-8ca5-6288b177ba0c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.978984 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f5081a-73e1-49f7-ac0a-d42c5271b6ba" path="/var/lib/kubelet/pods/62f5081a-73e1-49f7-ac0a-d42c5271b6ba/volumes" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.981549 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-config-data" (OuterVolumeSpecName: "config-data") pod "99f48473-9869-4203-8ca5-6288b177ba0c" (UID: "99f48473-9869-4203-8ca5-6288b177ba0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.981723 4760 generic.go:334] "Generic (PLEG): container finished" podID="99f48473-9869-4203-8ca5-6288b177ba0c" containerID="7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e" exitCode=0 Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.981865 4760 generic.go:334] "Generic (PLEG): container finished" podID="99f48473-9869-4203-8ca5-6288b177ba0c" containerID="ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e" exitCode=143 Nov 25 08:30:30 crc kubenswrapper[4760]: I1125 08:30:30.981852 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.013990 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "99f48473-9869-4203-8ca5-6288b177ba0c" (UID: "99f48473-9869-4203-8ca5-6288b177ba0c"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.020909 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.020938 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.020949 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzqtc\" (UniqueName: \"kubernetes.io/projected/99f48473-9869-4203-8ca5-6288b177ba0c-kube-api-access-pzqtc\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.020958 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f48473-9869-4203-8ca5-6288b177ba0c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.072567 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 08:30:31 crc kubenswrapper[4760]: E1125 08:30:31.073028 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f48473-9869-4203-8ca5-6288b177ba0c" containerName="nova-metadata-log" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073046 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f48473-9869-4203-8ca5-6288b177ba0c" containerName="nova-metadata-log" Nov 25 08:30:31 crc kubenswrapper[4760]: E1125 08:30:31.073070 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44f13d4-c189-4609-944a-3dbaaee53e6b" containerName="nova-cell1-conductor-db-sync" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073077 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c44f13d4-c189-4609-944a-3dbaaee53e6b" containerName="nova-cell1-conductor-db-sync" Nov 25 08:30:31 crc kubenswrapper[4760]: E1125 08:30:31.073099 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f48473-9869-4203-8ca5-6288b177ba0c" containerName="nova-metadata-metadata" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073106 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f48473-9869-4203-8ca5-6288b177ba0c" containerName="nova-metadata-metadata" Nov 25 08:30:31 crc kubenswrapper[4760]: E1125 08:30:31.073116 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f5081a-73e1-49f7-ac0a-d42c5271b6ba" containerName="dnsmasq-dns" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073155 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f5081a-73e1-49f7-ac0a-d42c5271b6ba" containerName="dnsmasq-dns" Nov 25 08:30:31 crc kubenswrapper[4760]: E1125 08:30:31.073164 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152d5f92-3188-4d96-8594-455aacbb0e4a" containerName="nova-manage" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073170 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="152d5f92-3188-4d96-8594-455aacbb0e4a" containerName="nova-manage" Nov 25 08:30:31 crc kubenswrapper[4760]: E1125 08:30:31.073185 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f5081a-73e1-49f7-ac0a-d42c5271b6ba" containerName="init" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073191 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f5081a-73e1-49f7-ac0a-d42c5271b6ba" containerName="init" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073373 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f48473-9869-4203-8ca5-6288b177ba0c" containerName="nova-metadata-log" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073390 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="99f48473-9869-4203-8ca5-6288b177ba0c" containerName="nova-metadata-metadata" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073397 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44f13d4-c189-4609-944a-3dbaaee53e6b" containerName="nova-cell1-conductor-db-sync" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073406 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f5081a-73e1-49f7-ac0a-d42c5271b6ba" containerName="dnsmasq-dns" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.073423 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="152d5f92-3188-4d96-8594-455aacbb0e4a" containerName="nova-manage" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.074041 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.074067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863b3b4b-8314-4de5-9f3d-29b7028dbd6e","Type":"ContainerDied","Data":"3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265"} Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.074193 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99f48473-9869-4203-8ca5-6288b177ba0c","Type":"ContainerDied","Data":"7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e"} Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.074258 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99f48473-9869-4203-8ca5-6288b177ba0c","Type":"ContainerDied","Data":"ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e"} Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.074272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"99f48473-9869-4203-8ca5-6288b177ba0c","Type":"ContainerDied","Data":"663375dbdda8012bd0dd7ece70de1929ddb92474dc0004e66489e69cb76515f6"} Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.074215 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.074287 4760 scope.go:117] "RemoveContainer" containerID="7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.077042 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.099531 4760 scope.go:117] "RemoveContainer" containerID="ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.123739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srrbl\" (UniqueName: \"kubernetes.io/projected/db562c11-b116-4a44-9506-ef67f5211979-kube-api-access-srrbl\") pod \"nova-cell1-conductor-0\" (UID: \"db562c11-b116-4a44-9506-ef67f5211979\") " pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.124127 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db562c11-b116-4a44-9506-ef67f5211979-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"db562c11-b116-4a44-9506-ef67f5211979\") " pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.124324 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db562c11-b116-4a44-9506-ef67f5211979-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"db562c11-b116-4a44-9506-ef67f5211979\") " pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.128920 4760 scope.go:117] "RemoveContainer" containerID="7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e" Nov 25 08:30:31 crc kubenswrapper[4760]: E1125 08:30:31.132458 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e\": container with ID starting with 7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e not found: ID does not exist" containerID="7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.132529 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e"} err="failed to get container status \"7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e\": rpc error: code = NotFound desc = could not find container \"7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e\": container with ID starting with 7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e not found: ID does not exist" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.132561 4760 scope.go:117] "RemoveContainer" containerID="ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e" Nov 25 08:30:31 crc kubenswrapper[4760]: E1125 08:30:31.136445 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e\": container with ID starting with ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e not found: ID does not exist" containerID="ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e" Nov 25 08:30:31 crc 
kubenswrapper[4760]: I1125 08:30:31.136525 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e"} err="failed to get container status \"ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e\": rpc error: code = NotFound desc = could not find container \"ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e\": container with ID starting with ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e not found: ID does not exist" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.136559 4760 scope.go:117] "RemoveContainer" containerID="7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.137918 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e"} err="failed to get container status \"7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e\": rpc error: code = NotFound desc = could not find container \"7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e\": container with ID starting with 7e9c49f6ac1fb439bcfa5766e7adc2e382a885237afb5c0ba10c9db3a7e25c2e not found: ID does not exist" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.137949 4760 scope.go:117] "RemoveContainer" containerID="ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.138396 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e"} err="failed to get container status \"ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e\": rpc error: code = NotFound desc = could not find container \"ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e\": container 
with ID starting with ab1027238d9b27df3ad532c81af2cafe00ee3a4050951191b9d58168ca98621e not found: ID does not exist" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.226295 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db562c11-b116-4a44-9506-ef67f5211979-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"db562c11-b116-4a44-9506-ef67f5211979\") " pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.226368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srrbl\" (UniqueName: \"kubernetes.io/projected/db562c11-b116-4a44-9506-ef67f5211979-kube-api-access-srrbl\") pod \"nova-cell1-conductor-0\" (UID: \"db562c11-b116-4a44-9506-ef67f5211979\") " pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.226447 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db562c11-b116-4a44-9506-ef67f5211979-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"db562c11-b116-4a44-9506-ef67f5211979\") " pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.232171 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db562c11-b116-4a44-9506-ef67f5211979-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"db562c11-b116-4a44-9506-ef67f5211979\") " pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.232176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db562c11-b116-4a44-9506-ef67f5211979-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"db562c11-b116-4a44-9506-ef67f5211979\") " pod="openstack/nova-cell1-conductor-0" Nov 25 
08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.248749 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srrbl\" (UniqueName: \"kubernetes.io/projected/db562c11-b116-4a44-9506-ef67f5211979-kube-api-access-srrbl\") pod \"nova-cell1-conductor-0\" (UID: \"db562c11-b116-4a44-9506-ef67f5211979\") " pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.342181 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.350264 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.363998 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.365875 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.367499 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.373059 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.389850 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.415952 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.429634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.429709 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.429739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2nx2\" (UniqueName: \"kubernetes.io/projected/5468c668-1624-46e5-964e-d1cdb1f47ab8-kube-api-access-n2nx2\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.429766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5468c668-1624-46e5-964e-d1cdb1f47ab8-logs\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.430046 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-config-data\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc 
kubenswrapper[4760]: I1125 08:30:31.531687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-config-data\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.532010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.532051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.532075 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2nx2\" (UniqueName: \"kubernetes.io/projected/5468c668-1624-46e5-964e-d1cdb1f47ab8-kube-api-access-n2nx2\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.532101 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5468c668-1624-46e5-964e-d1cdb1f47ab8-logs\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.532534 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5468c668-1624-46e5-964e-d1cdb1f47ab8-logs\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.539385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.540916 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-config-data\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.544019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.572528 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2nx2\" (UniqueName: \"kubernetes.io/projected/5468c668-1624-46e5-964e-d1cdb1f47ab8-kube-api-access-n2nx2\") pod \"nova-metadata-0\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.686326 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.865707 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 08:30:31 crc kubenswrapper[4760]: I1125 08:30:31.991344 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"db562c11-b116-4a44-9506-ef67f5211979","Type":"ContainerStarted","Data":"5a00c365c76298f8280617e6cefcc1103bdfc9dfb54b167917206c8ddbbb2867"} Nov 25 08:30:32 crc kubenswrapper[4760]: I1125 08:30:32.137538 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:30:32 crc kubenswrapper[4760]: W1125 08:30:32.146589 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5468c668_1624_46e5_964e_d1cdb1f47ab8.slice/crio-65077cdb6c24dea1ddccc60e4deef75cb8929747e92d3406038239ecc60d707a WatchSource:0}: Error finding container 65077cdb6c24dea1ddccc60e4deef75cb8929747e92d3406038239ecc60d707a: Status 404 returned error can't find the container with id 65077cdb6c24dea1ddccc60e4deef75cb8929747e92d3406038239ecc60d707a Nov 25 08:30:32 crc kubenswrapper[4760]: E1125 08:30:32.817458 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 08:30:32 crc kubenswrapper[4760]: E1125 08:30:32.819443 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] 
Nov 25 08:30:32 crc kubenswrapper[4760]: E1125 08:30:32.821057 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 08:30:32 crc kubenswrapper[4760]: E1125 08:30:32.821097 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fd9e4283-c692-42e6-9205-d00799923720" containerName="nova-scheduler-scheduler" Nov 25 08:30:32 crc kubenswrapper[4760]: I1125 08:30:32.950790 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f48473-9869-4203-8ca5-6288b177ba0c" path="/var/lib/kubelet/pods/99f48473-9869-4203-8ca5-6288b177ba0c/volumes" Nov 25 08:30:33 crc kubenswrapper[4760]: I1125 08:30:33.001859 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"db562c11-b116-4a44-9506-ef67f5211979","Type":"ContainerStarted","Data":"d99b758f979cea92da1e5c6c4307ebacb05c5861791f27e368efda8d4e72af4e"} Nov 25 08:30:33 crc kubenswrapper[4760]: I1125 08:30:33.003048 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:33 crc kubenswrapper[4760]: I1125 08:30:33.005594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5468c668-1624-46e5-964e-d1cdb1f47ab8","Type":"ContainerStarted","Data":"8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7"} Nov 25 08:30:33 crc kubenswrapper[4760]: I1125 08:30:33.005647 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"5468c668-1624-46e5-964e-d1cdb1f47ab8","Type":"ContainerStarted","Data":"604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51"} Nov 25 08:30:33 crc kubenswrapper[4760]: I1125 08:30:33.005663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5468c668-1624-46e5-964e-d1cdb1f47ab8","Type":"ContainerStarted","Data":"65077cdb6c24dea1ddccc60e4deef75cb8929747e92d3406038239ecc60d707a"} Nov 25 08:30:33 crc kubenswrapper[4760]: I1125 08:30:33.027570 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.02754845 podStartE2EDuration="3.02754845s" podCreationTimestamp="2025-11-25 08:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:33.022925587 +0000 UTC m=+1166.731956402" watchObservedRunningTime="2025-11-25 08:30:33.02754845 +0000 UTC m=+1166.736579255" Nov 25 08:30:33 crc kubenswrapper[4760]: I1125 08:30:33.044911 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.04489638 podStartE2EDuration="2.04489638s" podCreationTimestamp="2025-11-25 08:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:33.042021287 +0000 UTC m=+1166.751052082" watchObservedRunningTime="2025-11-25 08:30:33.04489638 +0000 UTC m=+1166.753927175" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.785587 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.824209 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf252\" (UniqueName: \"kubernetes.io/projected/fd9e4283-c692-42e6-9205-d00799923720-kube-api-access-mf252\") pod \"fd9e4283-c692-42e6-9205-d00799923720\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.824352 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-config-data\") pod \"fd9e4283-c692-42e6-9205-d00799923720\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.824507 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-combined-ca-bundle\") pod \"fd9e4283-c692-42e6-9205-d00799923720\" (UID: \"fd9e4283-c692-42e6-9205-d00799923720\") " Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.832241 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9e4283-c692-42e6-9205-d00799923720-kube-api-access-mf252" (OuterVolumeSpecName: "kube-api-access-mf252") pod "fd9e4283-c692-42e6-9205-d00799923720" (UID: "fd9e4283-c692-42e6-9205-d00799923720"). InnerVolumeSpecName "kube-api-access-mf252". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.878230 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-config-data" (OuterVolumeSpecName: "config-data") pod "fd9e4283-c692-42e6-9205-d00799923720" (UID: "fd9e4283-c692-42e6-9205-d00799923720"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.883233 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd9e4283-c692-42e6-9205-d00799923720" (UID: "fd9e4283-c692-42e6-9205-d00799923720"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.903512 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.925998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-logs\") pod \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.926137 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nwwm\" (UniqueName: \"kubernetes.io/projected/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-kube-api-access-7nwwm\") pod \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.926177 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-combined-ca-bundle\") pod \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.926195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-config-data\") pod 
\"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\" (UID: \"863b3b4b-8314-4de5-9f3d-29b7028dbd6e\") " Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.926645 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-logs" (OuterVolumeSpecName: "logs") pod "863b3b4b-8314-4de5-9f3d-29b7028dbd6e" (UID: "863b3b4b-8314-4de5-9f3d-29b7028dbd6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.926734 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.926748 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf252\" (UniqueName: \"kubernetes.io/projected/fd9e4283-c692-42e6-9205-d00799923720-kube-api-access-mf252\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.926759 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9e4283-c692-42e6-9205-d00799923720-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.929883 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-kube-api-access-7nwwm" (OuterVolumeSpecName: "kube-api-access-7nwwm") pod "863b3b4b-8314-4de5-9f3d-29b7028dbd6e" (UID: "863b3b4b-8314-4de5-9f3d-29b7028dbd6e"). InnerVolumeSpecName "kube-api-access-7nwwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.951280 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "863b3b4b-8314-4de5-9f3d-29b7028dbd6e" (UID: "863b3b4b-8314-4de5-9f3d-29b7028dbd6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:34 crc kubenswrapper[4760]: I1125 08:30:34.963550 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-config-data" (OuterVolumeSpecName: "config-data") pod "863b3b4b-8314-4de5-9f3d-29b7028dbd6e" (UID: "863b3b4b-8314-4de5-9f3d-29b7028dbd6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.021564 4760 generic.go:334] "Generic (PLEG): container finished" podID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerID="2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b" exitCode=0 Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.021625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863b3b4b-8314-4de5-9f3d-29b7028dbd6e","Type":"ContainerDied","Data":"2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b"} Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.021650 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"863b3b4b-8314-4de5-9f3d-29b7028dbd6e","Type":"ContainerDied","Data":"75083a3bf7a434aaf9df366fea5abbd5528933a557004b6e3d9b235263259f94"} Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.021666 4760 scope.go:117] "RemoveContainer" containerID="2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b" Nov 25 08:30:35 crc kubenswrapper[4760]: 
I1125 08:30:35.021816 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.025266 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd9e4283-c692-42e6-9205-d00799923720" containerID="9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0" exitCode=0 Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.025296 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.025314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd9e4283-c692-42e6-9205-d00799923720","Type":"ContainerDied","Data":"9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0"} Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.025368 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd9e4283-c692-42e6-9205-d00799923720","Type":"ContainerDied","Data":"a396103881395577049753106d03116faca4ae1c5284a32e5846fbd5f805f186"} Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.028118 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.028153 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nwwm\" (UniqueName: \"kubernetes.io/projected/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-kube-api-access-7nwwm\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.028166 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:35 crc 
kubenswrapper[4760]: I1125 08:30:35.028176 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b3b4b-8314-4de5-9f3d-29b7028dbd6e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.064426 4760 scope.go:117] "RemoveContainer" containerID="3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.080523 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.101484 4760 scope.go:117] "RemoveContainer" containerID="2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b" Nov 25 08:30:35 crc kubenswrapper[4760]: E1125 08:30:35.102391 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b\": container with ID starting with 2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b not found: ID does not exist" containerID="2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.102478 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b"} err="failed to get container status \"2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b\": rpc error: code = NotFound desc = could not find container \"2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b\": container with ID starting with 2b5a594e1e576f4d0d6bd1ceb4ee34f928d4c276f8fb49986ddec2b9aa1c092b not found: ID does not exist" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.102525 4760 scope.go:117] "RemoveContainer" containerID="3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265" Nov 25 
08:30:35 crc kubenswrapper[4760]: E1125 08:30:35.103390 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265\": container with ID starting with 3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265 not found: ID does not exist" containerID="3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.103417 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265"} err="failed to get container status \"3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265\": rpc error: code = NotFound desc = could not find container \"3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265\": container with ID starting with 3683064c85645c04d7a3bb4ffc6ff4e474dc4f5591cb84af56265a2a58279265 not found: ID does not exist" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.103432 4760 scope.go:117] "RemoveContainer" containerID="9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.107779 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.120435 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:30:35 crc kubenswrapper[4760]: E1125 08:30:35.120837 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-api" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.120850 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-api" Nov 25 08:30:35 crc kubenswrapper[4760]: E1125 08:30:35.120863 4760 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9e4283-c692-42e6-9205-d00799923720" containerName="nova-scheduler-scheduler" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.120869 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9e4283-c692-42e6-9205-d00799923720" containerName="nova-scheduler-scheduler" Nov 25 08:30:35 crc kubenswrapper[4760]: E1125 08:30:35.120877 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-log" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.120884 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-log" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.121045 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9e4283-c692-42e6-9205-d00799923720" containerName="nova-scheduler-scheduler" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.121063 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-log" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.121073 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" containerName="nova-api-api" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.122181 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.128408 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.130144 4760 scope.go:117] "RemoveContainer" containerID="9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0" Nov 25 08:30:35 crc kubenswrapper[4760]: E1125 08:30:35.131432 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0\": container with ID starting with 9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0 not found: ID does not exist" containerID="9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.131476 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0"} err="failed to get container status \"9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0\": rpc error: code = NotFound desc = could not find container \"9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0\": container with ID starting with 9a857a7d12f7cf9c347e5b3c4f47504f04cf6abf61768f1e7c5be955a9b784d0 not found: ID does not exist" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.142475 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.149697 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.156612 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.162754 4760 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.164221 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.165961 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.183120 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.230761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx62z\" (UniqueName: \"kubernetes.io/projected/89af48c1-24c2-4f59-a5f2-574e06417973-kube-api-access-fx62z\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.230820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.230941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ljk\" (UniqueName: \"kubernetes.io/projected/f972542f-a24d-4356-b5e1-2c3bbb87872f-kube-api-access-n7ljk\") pod \"nova-scheduler-0\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.230965 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-config-data\") pod \"nova-scheduler-0\" (UID: 
\"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.230999 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.231024 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-config-data\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.231063 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89af48c1-24c2-4f59-a5f2-574e06417973-logs\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.333276 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7ljk\" (UniqueName: \"kubernetes.io/projected/f972542f-a24d-4356-b5e1-2c3bbb87872f-kube-api-access-n7ljk\") pod \"nova-scheduler-0\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.333332 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-config-data\") pod \"nova-scheduler-0\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.333370 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.333396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-config-data\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.333436 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89af48c1-24c2-4f59-a5f2-574e06417973-logs\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.333467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx62z\" (UniqueName: \"kubernetes.io/projected/89af48c1-24c2-4f59-a5f2-574e06417973-kube-api-access-fx62z\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.333485 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.334147 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89af48c1-24c2-4f59-a5f2-574e06417973-logs\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") 
" pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.337830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-config-data\") pod \"nova-scheduler-0\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.338337 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-config-data\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.338969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.338353 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.352467 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7ljk\" (UniqueName: \"kubernetes.io/projected/f972542f-a24d-4356-b5e1-2c3bbb87872f-kube-api-access-n7ljk\") pod \"nova-scheduler-0\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.364181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx62z\" (UniqueName: 
\"kubernetes.io/projected/89af48c1-24c2-4f59-a5f2-574e06417973-kube-api-access-fx62z\") pod \"nova-api-0\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.440165 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.479058 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:30:35 crc kubenswrapper[4760]: I1125 08:30:35.936335 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:30:35 crc kubenswrapper[4760]: W1125 08:30:35.940887 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf972542f_a24d_4356_b5e1_2c3bbb87872f.slice/crio-4e7f417da4fb7077f67baa3170faa1e4c801d88fba2f5021e0a83fecbebe5c8a WatchSource:0}: Error finding container 4e7f417da4fb7077f67baa3170faa1e4c801d88fba2f5021e0a83fecbebe5c8a: Status 404 returned error can't find the container with id 4e7f417da4fb7077f67baa3170faa1e4c801d88fba2f5021e0a83fecbebe5c8a Nov 25 08:30:36 crc kubenswrapper[4760]: I1125 08:30:36.019829 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:36 crc kubenswrapper[4760]: W1125 08:30:36.021786 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89af48c1_24c2_4f59_a5f2_574e06417973.slice/crio-b8e31a3a849cd2f568d241dba5f8773b913e41a2c12f604b3a7ec02cbc83a680 WatchSource:0}: Error finding container b8e31a3a849cd2f568d241dba5f8773b913e41a2c12f604b3a7ec02cbc83a680: Status 404 returned error can't find the container with id b8e31a3a849cd2f568d241dba5f8773b913e41a2c12f604b3a7ec02cbc83a680 Nov 25 08:30:36 crc kubenswrapper[4760]: I1125 08:30:36.040717 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f972542f-a24d-4356-b5e1-2c3bbb87872f","Type":"ContainerStarted","Data":"4e7f417da4fb7077f67baa3170faa1e4c801d88fba2f5021e0a83fecbebe5c8a"} Nov 25 08:30:36 crc kubenswrapper[4760]: I1125 08:30:36.043799 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89af48c1-24c2-4f59-a5f2-574e06417973","Type":"ContainerStarted","Data":"b8e31a3a849cd2f568d241dba5f8773b913e41a2c12f604b3a7ec02cbc83a680"} Nov 25 08:30:36 crc kubenswrapper[4760]: I1125 08:30:36.687182 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 08:30:36 crc kubenswrapper[4760]: I1125 08:30:36.687828 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 08:30:36 crc kubenswrapper[4760]: I1125 08:30:36.986052 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863b3b4b-8314-4de5-9f3d-29b7028dbd6e" path="/var/lib/kubelet/pods/863b3b4b-8314-4de5-9f3d-29b7028dbd6e/volumes" Nov 25 08:30:36 crc kubenswrapper[4760]: I1125 08:30:36.986801 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9e4283-c692-42e6-9205-d00799923720" path="/var/lib/kubelet/pods/fd9e4283-c692-42e6-9205-d00799923720/volumes" Nov 25 08:30:37 crc kubenswrapper[4760]: I1125 08:30:37.054149 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f972542f-a24d-4356-b5e1-2c3bbb87872f","Type":"ContainerStarted","Data":"715a1b717bec4d01db8d283599370bb6c07bc7483a52d97df76eb4748c2a4c34"} Nov 25 08:30:37 crc kubenswrapper[4760]: I1125 08:30:37.056039 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89af48c1-24c2-4f59-a5f2-574e06417973","Type":"ContainerStarted","Data":"3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794"} Nov 25 08:30:37 crc kubenswrapper[4760]: I1125 08:30:37.056090 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89af48c1-24c2-4f59-a5f2-574e06417973","Type":"ContainerStarted","Data":"380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca"} Nov 25 08:30:37 crc kubenswrapper[4760]: I1125 08:30:37.080712 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.080692066 podStartE2EDuration="2.080692066s" podCreationTimestamp="2025-11-25 08:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:37.071753518 +0000 UTC m=+1170.780784313" watchObservedRunningTime="2025-11-25 08:30:37.080692066 +0000 UTC m=+1170.789722861" Nov 25 08:30:37 crc kubenswrapper[4760]: I1125 08:30:37.092901 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.092884657 podStartE2EDuration="2.092884657s" podCreationTimestamp="2025-11-25 08:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:37.086416241 +0000 UTC m=+1170.795447056" watchObservedRunningTime="2025-11-25 08:30:37.092884657 +0000 UTC m=+1170.801915452" Nov 25 08:30:40 crc kubenswrapper[4760]: I1125 08:30:40.440446 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 08:30:41 crc kubenswrapper[4760]: I1125 08:30:41.454622 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 08:30:41 crc kubenswrapper[4760]: I1125 08:30:41.687241 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 08:30:41 crc kubenswrapper[4760]: I1125 08:30:41.687630 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Nov 25 08:30:42 crc kubenswrapper[4760]: I1125 08:30:42.705493 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 08:30:42 crc kubenswrapper[4760]: I1125 08:30:42.705513 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 08:30:45 crc kubenswrapper[4760]: I1125 08:30:45.441379 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 08:30:45 crc kubenswrapper[4760]: I1125 08:30:45.474189 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 08:30:45 crc kubenswrapper[4760]: I1125 08:30:45.480606 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 08:30:45 crc kubenswrapper[4760]: I1125 08:30:45.480671 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 08:30:46 crc kubenswrapper[4760]: I1125 08:30:46.166433 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 08:30:46 crc kubenswrapper[4760]: I1125 08:30:46.522682 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 
08:30:46 crc kubenswrapper[4760]: I1125 08:30:46.563653 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 08:30:51 crc kubenswrapper[4760]: I1125 08:30:51.698038 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 08:30:51 crc kubenswrapper[4760]: I1125 08:30:51.699441 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 08:30:51 crc kubenswrapper[4760]: I1125 08:30:51.716202 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 08:30:52 crc kubenswrapper[4760]: I1125 08:30:52.192774 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.197954 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.206708 4760 generic.go:334] "Generic (PLEG): container finished" podID="c606d7ed-7669-4df1-bc31-851c14fdbc73" containerID="87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24" exitCode=137 Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.206758 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.206805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c606d7ed-7669-4df1-bc31-851c14fdbc73","Type":"ContainerDied","Data":"87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24"} Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.206830 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c606d7ed-7669-4df1-bc31-851c14fdbc73","Type":"ContainerDied","Data":"978879b30522efa8d0bd0e51ed1a565bf3d5644465d1a8ce440a773bb1c6c3c7"} Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.206845 4760 scope.go:117] "RemoveContainer" containerID="87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.234517 4760 scope.go:117] "RemoveContainer" containerID="87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24" Nov 25 08:30:54 crc kubenswrapper[4760]: E1125 08:30:54.234948 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24\": container with ID starting with 87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24 not found: ID does not exist" containerID="87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.234982 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24"} err="failed to get container status \"87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24\": rpc error: code = NotFound desc = could not find container \"87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24\": container with ID starting with 
87d55b96c1a697b4f564e31ae13f1f7c064f9e2e03d177623b106ea469d3ac24 not found: ID does not exist" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.397892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-combined-ca-bundle\") pod \"c606d7ed-7669-4df1-bc31-851c14fdbc73\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.398300 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z8vt\" (UniqueName: \"kubernetes.io/projected/c606d7ed-7669-4df1-bc31-851c14fdbc73-kube-api-access-8z8vt\") pod \"c606d7ed-7669-4df1-bc31-851c14fdbc73\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.398423 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-config-data\") pod \"c606d7ed-7669-4df1-bc31-851c14fdbc73\" (UID: \"c606d7ed-7669-4df1-bc31-851c14fdbc73\") " Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.413494 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c606d7ed-7669-4df1-bc31-851c14fdbc73-kube-api-access-8z8vt" (OuterVolumeSpecName: "kube-api-access-8z8vt") pod "c606d7ed-7669-4df1-bc31-851c14fdbc73" (UID: "c606d7ed-7669-4df1-bc31-851c14fdbc73"). InnerVolumeSpecName "kube-api-access-8z8vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.431199 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-config-data" (OuterVolumeSpecName: "config-data") pod "c606d7ed-7669-4df1-bc31-851c14fdbc73" (UID: "c606d7ed-7669-4df1-bc31-851c14fdbc73"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.452392 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c606d7ed-7669-4df1-bc31-851c14fdbc73" (UID: "c606d7ed-7669-4df1-bc31-851c14fdbc73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.501090 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.501156 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c606d7ed-7669-4df1-bc31-851c14fdbc73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.501169 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z8vt\" (UniqueName: \"kubernetes.io/projected/c606d7ed-7669-4df1-bc31-851c14fdbc73-kube-api-access-8z8vt\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.543994 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.557960 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.575439 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 08:30:54 crc kubenswrapper[4760]: E1125 08:30:54.575863 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c606d7ed-7669-4df1-bc31-851c14fdbc73" 
containerName="nova-cell1-novncproxy-novncproxy" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.575884 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c606d7ed-7669-4df1-bc31-851c14fdbc73" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.576107 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c606d7ed-7669-4df1-bc31-851c14fdbc73" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.577555 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.579826 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.581075 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.581600 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.593100 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.601800 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.601836 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4lts\" (UniqueName: 
\"kubernetes.io/projected/012fc757-399f-4a14-9ef8-332e3c34f53a-kube-api-access-n4lts\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.601932 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.601997 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.602148 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.703570 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.703650 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4lts\" (UniqueName: 
\"kubernetes.io/projected/012fc757-399f-4a14-9ef8-332e3c34f53a-kube-api-access-n4lts\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.703708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.703745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.703784 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.708428 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.708558 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.712882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.712922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/012fc757-399f-4a14-9ef8-332e3c34f53a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.720975 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4lts\" (UniqueName: \"kubernetes.io/projected/012fc757-399f-4a14-9ef8-332e3c34f53a-kube-api-access-n4lts\") pod \"nova-cell1-novncproxy-0\" (UID: \"012fc757-399f-4a14-9ef8-332e3c34f53a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.901143 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:54 crc kubenswrapper[4760]: I1125 08:30:54.949111 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c606d7ed-7669-4df1-bc31-851c14fdbc73" path="/var/lib/kubelet/pods/c606d7ed-7669-4df1-bc31-851c14fdbc73/volumes" Nov 25 08:30:55 crc kubenswrapper[4760]: I1125 08:30:55.244682 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 08:30:55 crc kubenswrapper[4760]: I1125 08:30:55.355778 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 08:30:55 crc kubenswrapper[4760]: W1125 08:30:55.356976 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod012fc757_399f_4a14_9ef8_332e3c34f53a.slice/crio-7651bafdd9355e72cc55d45c83d6bcaa51ccd19b7fb4e6d42d8265c17c739603 WatchSource:0}: Error finding container 7651bafdd9355e72cc55d45c83d6bcaa51ccd19b7fb4e6d42d8265c17c739603: Status 404 returned error can't find the container with id 7651bafdd9355e72cc55d45c83d6bcaa51ccd19b7fb4e6d42d8265c17c739603 Nov 25 08:30:55 crc kubenswrapper[4760]: I1125 08:30:55.486809 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 08:30:55 crc kubenswrapper[4760]: I1125 08:30:55.487915 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 08:30:55 crc kubenswrapper[4760]: I1125 08:30:55.494620 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 08:30:55 crc kubenswrapper[4760]: I1125 08:30:55.497526 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.227970 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"012fc757-399f-4a14-9ef8-332e3c34f53a","Type":"ContainerStarted","Data":"047ddb64682af82d8051d4bd32135c576d63e9724e694a37f440c2edecb6b7fd"} Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.228004 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"012fc757-399f-4a14-9ef8-332e3c34f53a","Type":"ContainerStarted","Data":"7651bafdd9355e72cc55d45c83d6bcaa51ccd19b7fb4e6d42d8265c17c739603"} Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.228023 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.231728 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.251713 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.251689844 podStartE2EDuration="2.251689844s" podCreationTimestamp="2025-11-25 08:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:56.242020976 +0000 UTC m=+1189.951051781" watchObservedRunningTime="2025-11-25 08:30:56.251689844 +0000 UTC m=+1189.960720659" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.404096 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c9b558957-mx6l9"] Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.406216 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.458559 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9b558957-mx6l9"] Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.540871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-sb\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.540941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-config\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.541008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-nb\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.541036 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-dns-svc\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.541072 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cm7kf\" (UniqueName: \"kubernetes.io/projected/dcc85bf3-1602-4530-ab07-c3f12b365f5e-kube-api-access-cm7kf\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.642755 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-sb\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.642816 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-config\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.642868 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-nb\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.642909 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-dns-svc\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.643108 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm7kf\" (UniqueName: 
\"kubernetes.io/projected/dcc85bf3-1602-4530-ab07-c3f12b365f5e-kube-api-access-cm7kf\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.643849 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-sb\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.643985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-dns-svc\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.644034 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-config\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.644601 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-nb\") pod \"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.665345 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm7kf\" (UniqueName: \"kubernetes.io/projected/dcc85bf3-1602-4530-ab07-c3f12b365f5e-kube-api-access-cm7kf\") pod 
\"dnsmasq-dns-c9b558957-mx6l9\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:56 crc kubenswrapper[4760]: I1125 08:30:56.741363 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:57 crc kubenswrapper[4760]: I1125 08:30:57.327609 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9b558957-mx6l9"] Nov 25 08:30:57 crc kubenswrapper[4760]: W1125 08:30:57.329673 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcc85bf3_1602_4530_ab07_c3f12b365f5e.slice/crio-1e2e89a06f8aa0cba9a96a04d3a8d6661ff2279fa59014853775d30b194cbd09 WatchSource:0}: Error finding container 1e2e89a06f8aa0cba9a96a04d3a8d6661ff2279fa59014853775d30b194cbd09: Status 404 returned error can't find the container with id 1e2e89a06f8aa0cba9a96a04d3a8d6661ff2279fa59014853775d30b194cbd09 Nov 25 08:30:58 crc kubenswrapper[4760]: I1125 08:30:58.245501 4760 generic.go:334] "Generic (PLEG): container finished" podID="dcc85bf3-1602-4530-ab07-c3f12b365f5e" containerID="fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8" exitCode=0 Nov 25 08:30:58 crc kubenswrapper[4760]: I1125 08:30:58.245553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" event={"ID":"dcc85bf3-1602-4530-ab07-c3f12b365f5e","Type":"ContainerDied","Data":"fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8"} Nov 25 08:30:58 crc kubenswrapper[4760]: I1125 08:30:58.246059 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" event={"ID":"dcc85bf3-1602-4530-ab07-c3f12b365f5e","Type":"ContainerStarted","Data":"1e2e89a06f8aa0cba9a96a04d3a8d6661ff2279fa59014853775d30b194cbd09"} Nov 25 08:30:58 crc kubenswrapper[4760]: I1125 08:30:58.760512 4760 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Nov 25 08:30:58 crc kubenswrapper[4760]: I1125 08:30:58.761067 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="ceilometer-central-agent" containerID="cri-o://e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d" gracePeriod=30 Nov 25 08:30:58 crc kubenswrapper[4760]: I1125 08:30:58.761135 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="proxy-httpd" containerID="cri-o://9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b" gracePeriod=30 Nov 25 08:30:58 crc kubenswrapper[4760]: I1125 08:30:58.761187 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="sg-core" containerID="cri-o://5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b" gracePeriod=30 Nov 25 08:30:58 crc kubenswrapper[4760]: I1125 08:30:58.761292 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="ceilometer-notification-agent" containerID="cri-o://9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0" gracePeriod=30 Nov 25 08:30:58 crc kubenswrapper[4760]: I1125 08:30:58.911778 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.255983 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" event={"ID":"dcc85bf3-1602-4530-ab07-c3f12b365f5e","Type":"ContainerStarted","Data":"cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c"} Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.256111 4760 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.259701 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerID="9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b" exitCode=0 Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.259738 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerID="5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b" exitCode=2 Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.259802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerDied","Data":"9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b"} Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.260031 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerDied","Data":"5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b"} Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.260090 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-log" containerID="cri-o://380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca" gracePeriod=30 Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.260215 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-api" containerID="cri-o://3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794" gracePeriod=30 Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.282449 4760 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" podStartSLOduration=3.282428986 podStartE2EDuration="3.282428986s" podCreationTimestamp="2025-11-25 08:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:30:59.276751812 +0000 UTC m=+1192.985782617" watchObservedRunningTime="2025-11-25 08:30:59.282428986 +0000 UTC m=+1192.991459781" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.654631 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.820703 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-run-httpd\") pod \"2b8a311c-357e-41f5-9973-6d4f966f96af\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.820764 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-scripts\") pod \"2b8a311c-357e-41f5-9973-6d4f966f96af\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.820859 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-combined-ca-bundle\") pod \"2b8a311c-357e-41f5-9973-6d4f966f96af\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.820933 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-log-httpd\") pod \"2b8a311c-357e-41f5-9973-6d4f966f96af\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") 
" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.821210 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b8a311c-357e-41f5-9973-6d4f966f96af" (UID: "2b8a311c-357e-41f5-9973-6d4f966f96af"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.821450 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b8a311c-357e-41f5-9973-6d4f966f96af" (UID: "2b8a311c-357e-41f5-9973-6d4f966f96af"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.820969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-ceilometer-tls-certs\") pod \"2b8a311c-357e-41f5-9973-6d4f966f96af\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.821588 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-config-data\") pod \"2b8a311c-357e-41f5-9973-6d4f966f96af\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.821985 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-sg-core-conf-yaml\") pod \"2b8a311c-357e-41f5-9973-6d4f966f96af\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.822030 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqq8w\" (UniqueName: \"kubernetes.io/projected/2b8a311c-357e-41f5-9973-6d4f966f96af-kube-api-access-cqq8w\") pod \"2b8a311c-357e-41f5-9973-6d4f966f96af\" (UID: \"2b8a311c-357e-41f5-9973-6d4f966f96af\") " Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.822486 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.822501 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b8a311c-357e-41f5-9973-6d4f966f96af-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.826126 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b8a311c-357e-41f5-9973-6d4f966f96af-kube-api-access-cqq8w" (OuterVolumeSpecName: "kube-api-access-cqq8w") pod "2b8a311c-357e-41f5-9973-6d4f966f96af" (UID: "2b8a311c-357e-41f5-9973-6d4f966f96af"). InnerVolumeSpecName "kube-api-access-cqq8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.832491 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-scripts" (OuterVolumeSpecName: "scripts") pod "2b8a311c-357e-41f5-9973-6d4f966f96af" (UID: "2b8a311c-357e-41f5-9973-6d4f966f96af"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.855733 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b8a311c-357e-41f5-9973-6d4f966f96af" (UID: "2b8a311c-357e-41f5-9973-6d4f966f96af"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.882331 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2b8a311c-357e-41f5-9973-6d4f966f96af" (UID: "2b8a311c-357e-41f5-9973-6d4f966f96af"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.902431 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.914389 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b8a311c-357e-41f5-9973-6d4f966f96af" (UID: "2b8a311c-357e-41f5-9973-6d4f966f96af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.924130 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.924169 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.924181 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.924195 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqq8w\" (UniqueName: \"kubernetes.io/projected/2b8a311c-357e-41f5-9973-6d4f966f96af-kube-api-access-cqq8w\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.924209 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:59 crc kubenswrapper[4760]: I1125 08:30:59.931861 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-config-data" (OuterVolumeSpecName: "config-data") pod "2b8a311c-357e-41f5-9973-6d4f966f96af" (UID: "2b8a311c-357e-41f5-9973-6d4f966f96af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.026179 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8a311c-357e-41f5-9973-6d4f966f96af-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.269699 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerID="9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0" exitCode=0 Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.269734 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerID="e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d" exitCode=0 Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.269788 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerDied","Data":"9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0"} Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.269821 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerDied","Data":"e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d"} Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.269833 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b8a311c-357e-41f5-9973-6d4f966f96af","Type":"ContainerDied","Data":"fd7368c0ee6f5e9732fd1c0398575d9e26bb582db4967de7a10066fda472edb1"} Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.269849 4760 scope.go:117] "RemoveContainer" containerID="9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.269989 4760 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.275089 4760 generic.go:334] "Generic (PLEG): container finished" podID="89af48c1-24c2-4f59-a5f2-574e06417973" containerID="380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca" exitCode=143 Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.276088 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89af48c1-24c2-4f59-a5f2-574e06417973","Type":"ContainerDied","Data":"380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca"} Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.298491 4760 scope.go:117] "RemoveContainer" containerID="5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.316881 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.352645 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.353055 4760 scope.go:117] "RemoveContainer" containerID="9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.366482 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:00 crc kubenswrapper[4760]: E1125 08:31:00.367362 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="ceilometer-notification-agent" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.367727 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="ceilometer-notification-agent" Nov 25 08:31:00 crc kubenswrapper[4760]: E1125 08:31:00.367805 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="ceilometer-central-agent" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.367859 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="ceilometer-central-agent" Nov 25 08:31:00 crc kubenswrapper[4760]: E1125 08:31:00.367932 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="sg-core" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.367986 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="sg-core" Nov 25 08:31:00 crc kubenswrapper[4760]: E1125 08:31:00.368049 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="proxy-httpd" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.368106 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="proxy-httpd" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.368527 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="proxy-httpd" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.368708 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="ceilometer-central-agent" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.368781 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="ceilometer-notification-agent" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.368846 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" containerName="sg-core" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.372557 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.377341 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.377528 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.377591 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.386573 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.412414 4760 scope.go:117] "RemoveContainer" containerID="e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.432600 4760 scope.go:117] "RemoveContainer" containerID="9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b" Nov 25 08:31:00 crc kubenswrapper[4760]: E1125 08:31:00.433093 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b\": container with ID starting with 9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b not found: ID does not exist" containerID="9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.433150 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b"} err="failed to get container status \"9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b\": rpc error: code = NotFound desc = could not find container \"9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b\": 
container with ID starting with 9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b not found: ID does not exist" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.433194 4760 scope.go:117] "RemoveContainer" containerID="5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b" Nov 25 08:31:00 crc kubenswrapper[4760]: E1125 08:31:00.433608 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b\": container with ID starting with 5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b not found: ID does not exist" containerID="5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.433715 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b"} err="failed to get container status \"5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b\": rpc error: code = NotFound desc = could not find container \"5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b\": container with ID starting with 5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b not found: ID does not exist" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.433803 4760 scope.go:117] "RemoveContainer" containerID="9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0" Nov 25 08:31:00 crc kubenswrapper[4760]: E1125 08:31:00.434157 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0\": container with ID starting with 9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0 not found: ID does not exist" 
containerID="9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.434183 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0"} err="failed to get container status \"9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0\": rpc error: code = NotFound desc = could not find container \"9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0\": container with ID starting with 9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0 not found: ID does not exist" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.434199 4760 scope.go:117] "RemoveContainer" containerID="e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d" Nov 25 08:31:00 crc kubenswrapper[4760]: E1125 08:31:00.434486 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d\": container with ID starting with e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d not found: ID does not exist" containerID="e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.434591 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d"} err="failed to get container status \"e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d\": rpc error: code = NotFound desc = could not find container \"e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d\": container with ID starting with e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d not found: ID does not exist" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.434685 4760 scope.go:117] 
"RemoveContainer" containerID="9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.435074 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b"} err="failed to get container status \"9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b\": rpc error: code = NotFound desc = could not find container \"9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b\": container with ID starting with 9f9b1d78ac1c548991ce9402dfbd6064f88217aea7d5a1e79d5938586f0a271b not found: ID does not exist" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.435159 4760 scope.go:117] "RemoveContainer" containerID="5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.435473 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b"} err="failed to get container status \"5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b\": rpc error: code = NotFound desc = could not find container \"5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b\": container with ID starting with 5e57d51127035d623c8043e39085fdcd417e498de7aa77bb0d67675e6cea3c1b not found: ID does not exist" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.435494 4760 scope.go:117] "RemoveContainer" containerID="9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.435714 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0"} err="failed to get container status \"9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0\": rpc error: code = 
NotFound desc = could not find container \"9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0\": container with ID starting with 9c13b47b40c27726db0bc464e04256cec1e3e0f328179930eb4f90c93261abc0 not found: ID does not exist" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.435739 4760 scope.go:117] "RemoveContainer" containerID="e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.436116 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d"} err="failed to get container status \"e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d\": rpc error: code = NotFound desc = could not find container \"e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d\": container with ID starting with e3937253fe951fc7a2a5fb7dfd2329e99010dd37c8de353fe7a41934fd89d09d not found: ID does not exist" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.567146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-config-data\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.567477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-scripts\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.567623 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2gb2\" (UniqueName: 
\"kubernetes.io/projected/d88591fb-e8b0-4375-a628-a76fe2b480c4-kube-api-access-h2gb2\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.568264 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.568465 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.568615 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-run-httpd\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.568758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.568879 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-log-httpd\") pod \"ceilometer-0\" (UID: 
\"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.676539 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2gb2\" (UniqueName: \"kubernetes.io/projected/d88591fb-e8b0-4375-a628-a76fe2b480c4-kube-api-access-h2gb2\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.676851 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.677806 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.677948 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-run-httpd\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.678063 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-log-httpd\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.678174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.678504 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-run-httpd\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.678643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-config-data\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.678851 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-scripts\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.679026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-log-httpd\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.683185 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 
08:31:00.684982 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-scripts\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.685343 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.686434 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.688187 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-config-data\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.697194 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2gb2\" (UniqueName: \"kubernetes.io/projected/d88591fb-e8b0-4375-a628-a76fe2b480c4-kube-api-access-h2gb2\") pod \"ceilometer-0\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.711638 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.765803 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:00 crc kubenswrapper[4760]: I1125 08:31:00.949386 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b8a311c-357e-41f5-9973-6d4f966f96af" path="/var/lib/kubelet/pods/2b8a311c-357e-41f5-9973-6d4f966f96af/volumes" Nov 25 08:31:01 crc kubenswrapper[4760]: I1125 08:31:01.071587 4760 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podc44f13d4-c189-4609-944a-3dbaaee53e6b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podc44f13d4-c189-4609-944a-3dbaaee53e6b] : Timed out while waiting for systemd to remove kubepods-besteffort-podc44f13d4_c189_4609_944a_3dbaaee53e6b.slice" Nov 25 08:31:01 crc kubenswrapper[4760]: E1125 08:31:01.071644 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podc44f13d4-c189-4609-944a-3dbaaee53e6b] : unable to destroy cgroup paths for cgroup [kubepods besteffort podc44f13d4-c189-4609-944a-3dbaaee53e6b] : Timed out while waiting for systemd to remove kubepods-besteffort-podc44f13d4_c189_4609_944a_3dbaaee53e6b.slice" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" podUID="c44f13d4-c189-4609-944a-3dbaaee53e6b" Nov 25 08:31:01 crc kubenswrapper[4760]: I1125 08:31:01.157985 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:01 crc kubenswrapper[4760]: W1125 08:31:01.159194 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd88591fb_e8b0_4375_a628_a76fe2b480c4.slice/crio-8f6e8ea7e0b01d15b66b1e249b514f982089b1ab0894384144650f26dccda337 WatchSource:0}: Error finding container 8f6e8ea7e0b01d15b66b1e249b514f982089b1ab0894384144650f26dccda337: 
Status 404 returned error can't find the container with id 8f6e8ea7e0b01d15b66b1e249b514f982089b1ab0894384144650f26dccda337 Nov 25 08:31:01 crc kubenswrapper[4760]: I1125 08:31:01.283980 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerStarted","Data":"8f6e8ea7e0b01d15b66b1e249b514f982089b1ab0894384144650f26dccda337"} Nov 25 08:31:01 crc kubenswrapper[4760]: I1125 08:31:01.285716 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dq2fl" Nov 25 08:31:01 crc kubenswrapper[4760]: I1125 08:31:01.746966 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:31:01 crc kubenswrapper[4760]: I1125 08:31:01.747053 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:31:02 crc kubenswrapper[4760]: I1125 08:31:02.297242 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerStarted","Data":"667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6"} Nov 25 08:31:02 crc kubenswrapper[4760]: I1125 08:31:02.902855 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.024802 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-combined-ca-bundle\") pod \"89af48c1-24c2-4f59-a5f2-574e06417973\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.024906 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-config-data\") pod \"89af48c1-24c2-4f59-a5f2-574e06417973\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.025371 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89af48c1-24c2-4f59-a5f2-574e06417973-logs\") pod \"89af48c1-24c2-4f59-a5f2-574e06417973\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.025635 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx62z\" (UniqueName: \"kubernetes.io/projected/89af48c1-24c2-4f59-a5f2-574e06417973-kube-api-access-fx62z\") pod \"89af48c1-24c2-4f59-a5f2-574e06417973\" (UID: \"89af48c1-24c2-4f59-a5f2-574e06417973\") " Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.025883 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89af48c1-24c2-4f59-a5f2-574e06417973-logs" (OuterVolumeSpecName: "logs") pod "89af48c1-24c2-4f59-a5f2-574e06417973" (UID: "89af48c1-24c2-4f59-a5f2-574e06417973"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.026555 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89af48c1-24c2-4f59-a5f2-574e06417973-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.030961 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89af48c1-24c2-4f59-a5f2-574e06417973-kube-api-access-fx62z" (OuterVolumeSpecName: "kube-api-access-fx62z") pod "89af48c1-24c2-4f59-a5f2-574e06417973" (UID: "89af48c1-24c2-4f59-a5f2-574e06417973"). InnerVolumeSpecName "kube-api-access-fx62z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.056710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89af48c1-24c2-4f59-a5f2-574e06417973" (UID: "89af48c1-24c2-4f59-a5f2-574e06417973"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.058605 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-config-data" (OuterVolumeSpecName: "config-data") pod "89af48c1-24c2-4f59-a5f2-574e06417973" (UID: "89af48c1-24c2-4f59-a5f2-574e06417973"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.130492 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.130530 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89af48c1-24c2-4f59-a5f2-574e06417973-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.130540 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx62z\" (UniqueName: \"kubernetes.io/projected/89af48c1-24c2-4f59-a5f2-574e06417973-kube-api-access-fx62z\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.307202 4760 generic.go:334] "Generic (PLEG): container finished" podID="89af48c1-24c2-4f59-a5f2-574e06417973" containerID="3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794" exitCode=0 Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.307256 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89af48c1-24c2-4f59-a5f2-574e06417973","Type":"ContainerDied","Data":"3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794"} Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.307313 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89af48c1-24c2-4f59-a5f2-574e06417973","Type":"ContainerDied","Data":"b8e31a3a849cd2f568d241dba5f8773b913e41a2c12f604b3a7ec02cbc83a680"} Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.307332 4760 scope.go:117] "RemoveContainer" containerID="3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.307345 4760 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.314533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerStarted","Data":"737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a"} Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.339701 4760 scope.go:117] "RemoveContainer" containerID="380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.353946 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.366403 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.374046 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:03 crc kubenswrapper[4760]: E1125 08:31:03.374479 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-log" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.374493 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-log" Nov 25 08:31:03 crc kubenswrapper[4760]: E1125 08:31:03.374520 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-api" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.374527 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-api" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.374742 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-api" Nov 25 08:31:03 crc kubenswrapper[4760]: 
I1125 08:31:03.374768 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" containerName="nova-api-log" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.375771 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.379130 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.379282 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.379320 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.383801 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.421002 4760 scope.go:117] "RemoveContainer" containerID="3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794" Nov 25 08:31:03 crc kubenswrapper[4760]: E1125 08:31:03.421662 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794\": container with ID starting with 3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794 not found: ID does not exist" containerID="3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.421702 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794"} err="failed to get container status \"3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794\": rpc error: code = NotFound desc = could not find 
container \"3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794\": container with ID starting with 3c68da0fa6564e21e2b703745455f21745d38dbb5f049006ef1e9d9bd9988794 not found: ID does not exist" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.421729 4760 scope.go:117] "RemoveContainer" containerID="380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca" Nov 25 08:31:03 crc kubenswrapper[4760]: E1125 08:31:03.422027 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca\": container with ID starting with 380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca not found: ID does not exist" containerID="380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.422056 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca"} err="failed to get container status \"380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca\": rpc error: code = NotFound desc = could not find container \"380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca\": container with ID starting with 380afe4b19b4d4f3a02e5d8e2725aa71b2af66b47713d6a77d139ed31869b6ca not found: ID does not exist" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.537819 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-config-data\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.537871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-public-tls-certs\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.537911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7nb5\" (UniqueName: \"kubernetes.io/projected/9659389f-f918-427f-bad4-520ded46d858-kube-api-access-k7nb5\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.538543 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.538607 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9659389f-f918-427f-bad4-520ded46d858-logs\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.538645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.640220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.640321 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-config-data\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.640347 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-public-tls-certs\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.640387 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7nb5\" (UniqueName: \"kubernetes.io/projected/9659389f-f918-427f-bad4-520ded46d858-kube-api-access-k7nb5\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.640444 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.640492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9659389f-f918-427f-bad4-520ded46d858-logs\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.641036 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9659389f-f918-427f-bad4-520ded46d858-logs\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.647006 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.649112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.649192 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-public-tls-certs\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.649368 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-config-data\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.659000 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7nb5\" (UniqueName: \"kubernetes.io/projected/9659389f-f918-427f-bad4-520ded46d858-kube-api-access-k7nb5\") pod \"nova-api-0\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " pod="openstack/nova-api-0" Nov 25 08:31:03 crc kubenswrapper[4760]: I1125 08:31:03.704041 4760 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:31:04 crc kubenswrapper[4760]: I1125 08:31:04.165103 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:04 crc kubenswrapper[4760]: I1125 08:31:04.328491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9659389f-f918-427f-bad4-520ded46d858","Type":"ContainerStarted","Data":"0d5748d4f6a2f24e5f4d4fa5e11eff5bfc254229a640a00c1fff4cba554c982c"} Nov 25 08:31:04 crc kubenswrapper[4760]: I1125 08:31:04.333023 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerStarted","Data":"acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399"} Nov 25 08:31:04 crc kubenswrapper[4760]: I1125 08:31:04.901578 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:31:04 crc kubenswrapper[4760]: I1125 08:31:04.919546 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:31:04 crc kubenswrapper[4760]: I1125 08:31:04.949809 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89af48c1-24c2-4f59-a5f2-574e06417973" path="/var/lib/kubelet/pods/89af48c1-24c2-4f59-a5f2-574e06417973/volumes" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.349283 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9659389f-f918-427f-bad4-520ded46d858","Type":"ContainerStarted","Data":"7d1cb2edfb16ed656efcfcefd1d64130a310900faedb87172dac7a52d153f722"} Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.353744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"9659389f-f918-427f-bad4-520ded46d858","Type":"ContainerStarted","Data":"245a178b2930d0ff51be2883c296c219579133dd773354559ff1a983577f56ff"} Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.364875 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="ceilometer-central-agent" containerID="cri-o://667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6" gracePeriod=30 Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.365521 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="sg-core" containerID="cri-o://acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399" gracePeriod=30 Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.365587 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerStarted","Data":"1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d"} Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.365614 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="ceilometer-notification-agent" containerID="cri-o://737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a" gracePeriod=30 Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.365644 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="proxy-httpd" containerID="cri-o://1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d" gracePeriod=30 Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.365651 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.378999 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.37897478 podStartE2EDuration="2.37897478s" podCreationTimestamp="2025-11-25 08:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:31:05.373201513 +0000 UTC m=+1199.082232328" watchObservedRunningTime="2025-11-25 08:31:05.37897478 +0000 UTC m=+1199.088005575" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.388920 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.413200 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.173325189 podStartE2EDuration="5.413181795s" podCreationTimestamp="2025-11-25 08:31:00 +0000 UTC" firstStartedPulling="2025-11-25 08:31:01.164677301 +0000 UTC m=+1194.873708096" lastFinishedPulling="2025-11-25 08:31:04.404533907 +0000 UTC m=+1198.113564702" observedRunningTime="2025-11-25 08:31:05.405979188 +0000 UTC m=+1199.115010003" watchObservedRunningTime="2025-11-25 08:31:05.413181795 +0000 UTC m=+1199.122212590" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.566753 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6r8p7"] Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.567996 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.573520 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.573753 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.592335 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6r8p7"] Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.689186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.689290 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-config-data\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.689440 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdhv\" (UniqueName: \"kubernetes.io/projected/7cc08500-3352-47d3-97f8-c269676edd00-kube-api-access-bgdhv\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.689495 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-scripts\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.791672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.791717 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-config-data\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.791788 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgdhv\" (UniqueName: \"kubernetes.io/projected/7cc08500-3352-47d3-97f8-c269676edd00-kube-api-access-bgdhv\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.791839 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-scripts\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.797582 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-config-data\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.800979 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-scripts\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.801718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.812661 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgdhv\" (UniqueName: \"kubernetes.io/projected/7cc08500-3352-47d3-97f8-c269676edd00-kube-api-access-bgdhv\") pod \"nova-cell1-cell-mapping-6r8p7\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:05 crc kubenswrapper[4760]: I1125 08:31:05.942674 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.374783 4760 generic.go:334] "Generic (PLEG): container finished" podID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerID="1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d" exitCode=0 Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.375067 4760 generic.go:334] "Generic (PLEG): container finished" podID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerID="acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399" exitCode=2 Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.375076 4760 generic.go:334] "Generic (PLEG): container finished" podID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerID="737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a" exitCode=0 Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.374927 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerDied","Data":"1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d"} Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.375370 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerDied","Data":"acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399"} Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.375384 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerDied","Data":"737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a"} Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.396491 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6r8p7"] Nov 25 08:31:06 crc kubenswrapper[4760]: W1125 08:31:06.406641 4760 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cc08500_3352_47d3_97f8_c269676edd00.slice/crio-06cf21fcc7bddf656e2fedc0b12024a44763e49583f7b8f5029c6cc8aa329b91 WatchSource:0}: Error finding container 06cf21fcc7bddf656e2fedc0b12024a44763e49583f7b8f5029c6cc8aa329b91: Status 404 returned error can't find the container with id 06cf21fcc7bddf656e2fedc0b12024a44763e49583f7b8f5029c6cc8aa329b91 Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.744266 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.816861 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69494d9f89-bwbsn"] Nov 25 08:31:06 crc kubenswrapper[4760]: I1125 08:31:06.817147 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" podUID="54bb51e4-6152-41e2-9489-b06e33c16177" containerName="dnsmasq-dns" containerID="cri-o://91bbd968de891dd2c3721d8043ea159565fda7da0c970a5aec82886f9b908206" gracePeriod=10 Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.029143 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.225599 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2gb2\" (UniqueName: \"kubernetes.io/projected/d88591fb-e8b0-4375-a628-a76fe2b480c4-kube-api-access-h2gb2\") pod \"d88591fb-e8b0-4375-a628-a76fe2b480c4\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.226118 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-run-httpd\") pod \"d88591fb-e8b0-4375-a628-a76fe2b480c4\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.226172 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-scripts\") pod \"d88591fb-e8b0-4375-a628-a76fe2b480c4\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.226200 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-ceilometer-tls-certs\") pod \"d88591fb-e8b0-4375-a628-a76fe2b480c4\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.226261 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-log-httpd\") pod \"d88591fb-e8b0-4375-a628-a76fe2b480c4\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.226543 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-config-data\") pod \"d88591fb-e8b0-4375-a628-a76fe2b480c4\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.226568 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-combined-ca-bundle\") pod \"d88591fb-e8b0-4375-a628-a76fe2b480c4\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.226606 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-sg-core-conf-yaml\") pod \"d88591fb-e8b0-4375-a628-a76fe2b480c4\" (UID: \"d88591fb-e8b0-4375-a628-a76fe2b480c4\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.228480 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d88591fb-e8b0-4375-a628-a76fe2b480c4" (UID: "d88591fb-e8b0-4375-a628-a76fe2b480c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.229031 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d88591fb-e8b0-4375-a628-a76fe2b480c4" (UID: "d88591fb-e8b0-4375-a628-a76fe2b480c4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.248460 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88591fb-e8b0-4375-a628-a76fe2b480c4-kube-api-access-h2gb2" (OuterVolumeSpecName: "kube-api-access-h2gb2") pod "d88591fb-e8b0-4375-a628-a76fe2b480c4" (UID: "d88591fb-e8b0-4375-a628-a76fe2b480c4"). InnerVolumeSpecName "kube-api-access-h2gb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.251961 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-scripts" (OuterVolumeSpecName: "scripts") pod "d88591fb-e8b0-4375-a628-a76fe2b480c4" (UID: "d88591fb-e8b0-4375-a628-a76fe2b480c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.271764 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d88591fb-e8b0-4375-a628-a76fe2b480c4" (UID: "d88591fb-e8b0-4375-a628-a76fe2b480c4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.312623 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d88591fb-e8b0-4375-a628-a76fe2b480c4" (UID: "d88591fb-e8b0-4375-a628-a76fe2b480c4"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.328570 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2gb2\" (UniqueName: \"kubernetes.io/projected/d88591fb-e8b0-4375-a628-a76fe2b480c4-kube-api-access-h2gb2\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.328598 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.328609 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.328619 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.328627 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88591fb-e8b0-4375-a628-a76fe2b480c4-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.328634 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.338857 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d88591fb-e8b0-4375-a628-a76fe2b480c4" (UID: 
"d88591fb-e8b0-4375-a628-a76fe2b480c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.360659 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-config-data" (OuterVolumeSpecName: "config-data") pod "d88591fb-e8b0-4375-a628-a76fe2b480c4" (UID: "d88591fb-e8b0-4375-a628-a76fe2b480c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.384727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6r8p7" event={"ID":"7cc08500-3352-47d3-97f8-c269676edd00","Type":"ContainerStarted","Data":"276fb59c2db16b6cfacf73005e89578f1431cc21ab51639bb8d7463f11c3c746"} Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.384775 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6r8p7" event={"ID":"7cc08500-3352-47d3-97f8-c269676edd00","Type":"ContainerStarted","Data":"06cf21fcc7bddf656e2fedc0b12024a44763e49583f7b8f5029c6cc8aa329b91"} Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.386489 4760 generic.go:334] "Generic (PLEG): container finished" podID="54bb51e4-6152-41e2-9489-b06e33c16177" containerID="91bbd968de891dd2c3721d8043ea159565fda7da0c970a5aec82886f9b908206" exitCode=0 Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.386573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" event={"ID":"54bb51e4-6152-41e2-9489-b06e33c16177","Type":"ContainerDied","Data":"91bbd968de891dd2c3721d8043ea159565fda7da0c970a5aec82886f9b908206"} Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.386612 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" 
event={"ID":"54bb51e4-6152-41e2-9489-b06e33c16177","Type":"ContainerDied","Data":"9070d983632fc8fa7c3e75af2c30009baed676026eb5dc4c5852824bc91bb936"} Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.386627 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9070d983632fc8fa7c3e75af2c30009baed676026eb5dc4c5852824bc91bb936" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.389936 4760 generic.go:334] "Generic (PLEG): container finished" podID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerID="667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6" exitCode=0 Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.390617 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.401176 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerDied","Data":"667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6"} Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.401262 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88591fb-e8b0-4375-a628-a76fe2b480c4","Type":"ContainerDied","Data":"8f6e8ea7e0b01d15b66b1e249b514f982089b1ab0894384144650f26dccda337"} Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.401290 4760 scope.go:117] "RemoveContainer" containerID="1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.405454 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6r8p7" podStartSLOduration=2.405432149 podStartE2EDuration="2.405432149s" podCreationTimestamp="2025-11-25 08:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 08:31:07.401804004 +0000 UTC m=+1201.110834799" watchObservedRunningTime="2025-11-25 08:31:07.405432149 +0000 UTC m=+1201.114462934" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.430120 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.430161 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88591fb-e8b0-4375-a628-a76fe2b480c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.454310 4760 scope.go:117] "RemoveContainer" containerID="acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.459724 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.468281 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.482452 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.495644 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.496225 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bb51e4-6152-41e2-9489-b06e33c16177" containerName="dnsmasq-dns" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496262 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bb51e4-6152-41e2-9489-b06e33c16177" containerName="dnsmasq-dns" Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.496296 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="sg-core" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496305 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="sg-core" Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.496323 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="ceilometer-notification-agent" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496332 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="ceilometer-notification-agent" Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.496347 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="proxy-httpd" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496354 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="proxy-httpd" Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.496373 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="ceilometer-central-agent" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496381 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="ceilometer-central-agent" Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.496396 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bb51e4-6152-41e2-9489-b06e33c16177" containerName="init" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496404 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bb51e4-6152-41e2-9489-b06e33c16177" containerName="init" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496622 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="ceilometer-central-agent" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496652 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="proxy-httpd" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496663 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="54bb51e4-6152-41e2-9489-b06e33c16177" containerName="dnsmasq-dns" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496679 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="ceilometer-notification-agent" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.496694 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" containerName="sg-core" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.520283 4760 scope.go:117] "RemoveContainer" containerID="737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.522227 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.530062 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.530267 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.532351 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.539098 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.539227 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-log-httpd\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.539284 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26wz8\" (UniqueName: \"kubernetes.io/projected/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-kube-api-access-26wz8\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.539317 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.539350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-scripts\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.539375 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-config-data\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.539403 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.539467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-run-httpd\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.570603 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.582473 4760 scope.go:117] "RemoveContainer" containerID="667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.635922 4760 scope.go:117] "RemoveContainer" 
containerID="1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d" Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.636288 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d\": container with ID starting with 1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d not found: ID does not exist" containerID="1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.636333 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d"} err="failed to get container status \"1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d\": rpc error: code = NotFound desc = could not find container \"1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d\": container with ID starting with 1129b692e42d4fb07a329033a819efcb23461e79857121718864f3990b8dd87d not found: ID does not exist" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.636362 4760 scope.go:117] "RemoveContainer" containerID="acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399" Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.636649 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399\": container with ID starting with acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399 not found: ID does not exist" containerID="acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.636685 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399"} err="failed to get container status \"acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399\": rpc error: code = NotFound desc = could not find container \"acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399\": container with ID starting with acdb68befa30608b44dffb14b64137263e70ad21e8e0962b6fd90d9bb2037399 not found: ID does not exist" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.636705 4760 scope.go:117] "RemoveContainer" containerID="737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a" Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.636961 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a\": container with ID starting with 737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a not found: ID does not exist" containerID="737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.636997 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a"} err="failed to get container status \"737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a\": rpc error: code = NotFound desc = could not find container \"737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a\": container with ID starting with 737468913eecb021da309705f219549b92620d39627d086119414c130037cc4a not found: ID does not exist" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.637011 4760 scope.go:117] "RemoveContainer" containerID="667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6" Nov 25 08:31:07 crc kubenswrapper[4760]: E1125 08:31:07.637241 4760 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6\": container with ID starting with 667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6 not found: ID does not exist" containerID="667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.637294 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6"} err="failed to get container status \"667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6\": rpc error: code = NotFound desc = could not find container \"667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6\": container with ID starting with 667ba0bd3bf7ee44f7bbecd4f5ca49d28e043abdf9e03bee1cd0540abc8a11f6 not found: ID does not exist" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640119 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwvd5\" (UniqueName: \"kubernetes.io/projected/54bb51e4-6152-41e2-9489-b06e33c16177-kube-api-access-gwvd5\") pod \"54bb51e4-6152-41e2-9489-b06e33c16177\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640288 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-dns-svc\") pod \"54bb51e4-6152-41e2-9489-b06e33c16177\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640326 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-config\") pod \"54bb51e4-6152-41e2-9489-b06e33c16177\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " Nov 25 
08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640426 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-nb\") pod \"54bb51e4-6152-41e2-9489-b06e33c16177\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640487 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-sb\") pod \"54bb51e4-6152-41e2-9489-b06e33c16177\" (UID: \"54bb51e4-6152-41e2-9489-b06e33c16177\") " Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640842 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-log-httpd\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26wz8\" (UniqueName: \"kubernetes.io/projected/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-kube-api-access-26wz8\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640927 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640951 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-scripts\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.640980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-config-data\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.641018 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.641084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-run-httpd\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.641147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.642579 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-run-httpd\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.642694 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-log-httpd\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.646895 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bb51e4-6152-41e2-9489-b06e33c16177-kube-api-access-gwvd5" (OuterVolumeSpecName: "kube-api-access-gwvd5") pod "54bb51e4-6152-41e2-9489-b06e33c16177" (UID: "54bb51e4-6152-41e2-9489-b06e33c16177"). InnerVolumeSpecName "kube-api-access-gwvd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.647870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.648167 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-config-data\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.648809 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.649095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-scripts\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.649321 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.660102 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26wz8\" (UniqueName: \"kubernetes.io/projected/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-kube-api-access-26wz8\") pod \"ceilometer-0\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " pod="openstack/ceilometer-0" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.712921 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-config" (OuterVolumeSpecName: "config") pod "54bb51e4-6152-41e2-9489-b06e33c16177" (UID: "54bb51e4-6152-41e2-9489-b06e33c16177"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.721749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54bb51e4-6152-41e2-9489-b06e33c16177" (UID: "54bb51e4-6152-41e2-9489-b06e33c16177"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.728799 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "54bb51e4-6152-41e2-9489-b06e33c16177" (UID: "54bb51e4-6152-41e2-9489-b06e33c16177"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.742375 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.742417 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwvd5\" (UniqueName: \"kubernetes.io/projected/54bb51e4-6152-41e2-9489-b06e33c16177-kube-api-access-gwvd5\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.742430 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.742442 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.744785 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54bb51e4-6152-41e2-9489-b06e33c16177" (UID: "54bb51e4-6152-41e2-9489-b06e33c16177"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.843982 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54bb51e4-6152-41e2-9489-b06e33c16177-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:07 crc kubenswrapper[4760]: I1125 08:31:07.866671 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:31:08 crc kubenswrapper[4760]: I1125 08:31:08.400265 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69494d9f89-bwbsn" Nov 25 08:31:08 crc kubenswrapper[4760]: W1125 08:31:08.415745 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefe8cbf4_8fba_4695_9a9a_63e2ffc0c3d2.slice/crio-b0f0cffbecf5b5ceb5bdb36c09f2b787f968ea129a898dca2a667ab7e4a43db3 WatchSource:0}: Error finding container b0f0cffbecf5b5ceb5bdb36c09f2b787f968ea129a898dca2a667ab7e4a43db3: Status 404 returned error can't find the container with id b0f0cffbecf5b5ceb5bdb36c09f2b787f968ea129a898dca2a667ab7e4a43db3 Nov 25 08:31:08 crc kubenswrapper[4760]: I1125 08:31:08.426728 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:31:08 crc kubenswrapper[4760]: I1125 08:31:08.482469 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69494d9f89-bwbsn"] Nov 25 08:31:08 crc kubenswrapper[4760]: I1125 08:31:08.490408 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69494d9f89-bwbsn"] Nov 25 08:31:08 crc kubenswrapper[4760]: I1125 08:31:08.962344 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54bb51e4-6152-41e2-9489-b06e33c16177" path="/var/lib/kubelet/pods/54bb51e4-6152-41e2-9489-b06e33c16177/volumes" Nov 25 08:31:08 crc kubenswrapper[4760]: I1125 
08:31:08.963620 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88591fb-e8b0-4375-a628-a76fe2b480c4" path="/var/lib/kubelet/pods/d88591fb-e8b0-4375-a628-a76fe2b480c4/volumes" Nov 25 08:31:09 crc kubenswrapper[4760]: I1125 08:31:09.412004 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerStarted","Data":"e2a46bdc2fbac6741e12931c29eaad6684f7f65f0b1d98b01f3a5b613eb7368e"} Nov 25 08:31:09 crc kubenswrapper[4760]: I1125 08:31:09.412381 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerStarted","Data":"b0f0cffbecf5b5ceb5bdb36c09f2b787f968ea129a898dca2a667ab7e4a43db3"} Nov 25 08:31:10 crc kubenswrapper[4760]: I1125 08:31:10.426454 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerStarted","Data":"aedc630812a97871b6547f6ab3ab006899045c49d912efe4be0b59c71821e111"} Nov 25 08:31:11 crc kubenswrapper[4760]: I1125 08:31:11.441121 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerStarted","Data":"d73cfaacbd00e5adf1c6b21a83bbbe7620706389a272f56e6420a709b5f5636e"} Nov 25 08:31:12 crc kubenswrapper[4760]: I1125 08:31:12.454156 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerStarted","Data":"06ddfc3cea1a203e800d58620becb2c656291710f58ee64b3a0a4e4475e92b16"} Nov 25 08:31:12 crc kubenswrapper[4760]: I1125 08:31:12.454555 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 08:31:12 crc kubenswrapper[4760]: I1125 08:31:12.457578 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="7cc08500-3352-47d3-97f8-c269676edd00" containerID="276fb59c2db16b6cfacf73005e89578f1431cc21ab51639bb8d7463f11c3c746" exitCode=0 Nov 25 08:31:12 crc kubenswrapper[4760]: I1125 08:31:12.457622 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6r8p7" event={"ID":"7cc08500-3352-47d3-97f8-c269676edd00","Type":"ContainerDied","Data":"276fb59c2db16b6cfacf73005e89578f1431cc21ab51639bb8d7463f11c3c746"} Nov 25 08:31:12 crc kubenswrapper[4760]: I1125 08:31:12.489351 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.241254976 podStartE2EDuration="5.489324877s" podCreationTimestamp="2025-11-25 08:31:07 +0000 UTC" firstStartedPulling="2025-11-25 08:31:08.420676736 +0000 UTC m=+1202.129707531" lastFinishedPulling="2025-11-25 08:31:11.668746617 +0000 UTC m=+1205.377777432" observedRunningTime="2025-11-25 08:31:12.481416429 +0000 UTC m=+1206.190447264" watchObservedRunningTime="2025-11-25 08:31:12.489324877 +0000 UTC m=+1206.198355672" Nov 25 08:31:13 crc kubenswrapper[4760]: I1125 08:31:13.704114 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 08:31:13 crc kubenswrapper[4760]: I1125 08:31:13.704475 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 08:31:13 crc kubenswrapper[4760]: I1125 08:31:13.929821 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.096809 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-config-data\") pod \"7cc08500-3352-47d3-97f8-c269676edd00\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.097120 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-combined-ca-bundle\") pod \"7cc08500-3352-47d3-97f8-c269676edd00\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.097408 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-scripts\") pod \"7cc08500-3352-47d3-97f8-c269676edd00\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.097446 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgdhv\" (UniqueName: \"kubernetes.io/projected/7cc08500-3352-47d3-97f8-c269676edd00-kube-api-access-bgdhv\") pod \"7cc08500-3352-47d3-97f8-c269676edd00\" (UID: \"7cc08500-3352-47d3-97f8-c269676edd00\") " Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.103843 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc08500-3352-47d3-97f8-c269676edd00-kube-api-access-bgdhv" (OuterVolumeSpecName: "kube-api-access-bgdhv") pod "7cc08500-3352-47d3-97f8-c269676edd00" (UID: "7cc08500-3352-47d3-97f8-c269676edd00"). InnerVolumeSpecName "kube-api-access-bgdhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.106352 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-scripts" (OuterVolumeSpecName: "scripts") pod "7cc08500-3352-47d3-97f8-c269676edd00" (UID: "7cc08500-3352-47d3-97f8-c269676edd00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.131748 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cc08500-3352-47d3-97f8-c269676edd00" (UID: "7cc08500-3352-47d3-97f8-c269676edd00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.132489 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-config-data" (OuterVolumeSpecName: "config-data") pod "7cc08500-3352-47d3-97f8-c269676edd00" (UID: "7cc08500-3352-47d3-97f8-c269676edd00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.200736 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.200780 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgdhv\" (UniqueName: \"kubernetes.io/projected/7cc08500-3352-47d3-97f8-c269676edd00-kube-api-access-bgdhv\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.200800 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.200818 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc08500-3352-47d3-97f8-c269676edd00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.474072 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6r8p7" event={"ID":"7cc08500-3352-47d3-97f8-c269676edd00","Type":"ContainerDied","Data":"06cf21fcc7bddf656e2fedc0b12024a44763e49583f7b8f5029c6cc8aa329b91"} Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.474408 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06cf21fcc7bddf656e2fedc0b12024a44763e49583f7b8f5029c6cc8aa329b91" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.474111 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6r8p7" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.724382 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.724381 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.808974 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.809881 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-log" containerID="cri-o://245a178b2930d0ff51be2883c296c219579133dd773354559ff1a983577f56ff" gracePeriod=30 Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.810354 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-api" containerID="cri-o://7d1cb2edfb16ed656efcfcefd1d64130a310900faedb87172dac7a52d153f722" gracePeriod=30 Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.832691 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.833015 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f972542f-a24d-4356-b5e1-2c3bbb87872f" 
containerName="nova-scheduler-scheduler" containerID="cri-o://715a1b717bec4d01db8d283599370bb6c07bc7483a52d97df76eb4748c2a4c34" gracePeriod=30 Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.889312 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.889691 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-log" containerID="cri-o://604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51" gracePeriod=30 Nov 25 08:31:14 crc kubenswrapper[4760]: I1125 08:31:14.890379 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-metadata" containerID="cri-o://8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7" gracePeriod=30 Nov 25 08:31:15 crc kubenswrapper[4760]: E1125 08:31:15.443091 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="715a1b717bec4d01db8d283599370bb6c07bc7483a52d97df76eb4748c2a4c34" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 08:31:15 crc kubenswrapper[4760]: E1125 08:31:15.445018 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="715a1b717bec4d01db8d283599370bb6c07bc7483a52d97df76eb4748c2a4c34" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 08:31:15 crc kubenswrapper[4760]: E1125 08:31:15.446276 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="715a1b717bec4d01db8d283599370bb6c07bc7483a52d97df76eb4748c2a4c34" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 08:31:15 crc kubenswrapper[4760]: E1125 08:31:15.446332 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f972542f-a24d-4356-b5e1-2c3bbb87872f" containerName="nova-scheduler-scheduler" Nov 25 08:31:15 crc kubenswrapper[4760]: I1125 08:31:15.494308 4760 generic.go:334] "Generic (PLEG): container finished" podID="9659389f-f918-427f-bad4-520ded46d858" containerID="245a178b2930d0ff51be2883c296c219579133dd773354559ff1a983577f56ff" exitCode=143 Nov 25 08:31:15 crc kubenswrapper[4760]: I1125 08:31:15.494450 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9659389f-f918-427f-bad4-520ded46d858","Type":"ContainerDied","Data":"245a178b2930d0ff51be2883c296c219579133dd773354559ff1a983577f56ff"} Nov 25 08:31:15 crc kubenswrapper[4760]: I1125 08:31:15.499098 4760 generic.go:334] "Generic (PLEG): container finished" podID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerID="604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51" exitCode=143 Nov 25 08:31:15 crc kubenswrapper[4760]: I1125 08:31:15.499188 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5468c668-1624-46e5-964e-d1cdb1f47ab8","Type":"ContainerDied","Data":"604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51"} Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.033076 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": read tcp 
10.217.0.2:40182->10.217.0.185:8775: read: connection reset by peer" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.033145 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": read tcp 10.217.0.2:40194->10.217.0.185:8775: read: connection reset by peer" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.415704 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.527651 4760 generic.go:334] "Generic (PLEG): container finished" podID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerID="8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7" exitCode=0 Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.527708 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.527720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5468c668-1624-46e5-964e-d1cdb1f47ab8","Type":"ContainerDied","Data":"8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7"} Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.527779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5468c668-1624-46e5-964e-d1cdb1f47ab8","Type":"ContainerDied","Data":"65077cdb6c24dea1ddccc60e4deef75cb8929747e92d3406038239ecc60d707a"} Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.527801 4760 scope.go:117] "RemoveContainer" containerID="8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.558332 4760 scope.go:117] "RemoveContainer" 
containerID="604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.577710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-config-data\") pod \"5468c668-1624-46e5-964e-d1cdb1f47ab8\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.577763 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2nx2\" (UniqueName: \"kubernetes.io/projected/5468c668-1624-46e5-964e-d1cdb1f47ab8-kube-api-access-n2nx2\") pod \"5468c668-1624-46e5-964e-d1cdb1f47ab8\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.577840 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5468c668-1624-46e5-964e-d1cdb1f47ab8-logs\") pod \"5468c668-1624-46e5-964e-d1cdb1f47ab8\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.577893 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-combined-ca-bundle\") pod \"5468c668-1624-46e5-964e-d1cdb1f47ab8\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.577918 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-nova-metadata-tls-certs\") pod \"5468c668-1624-46e5-964e-d1cdb1f47ab8\" (UID: \"5468c668-1624-46e5-964e-d1cdb1f47ab8\") " Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.579072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5468c668-1624-46e5-964e-d1cdb1f47ab8-logs" (OuterVolumeSpecName: "logs") pod "5468c668-1624-46e5-964e-d1cdb1f47ab8" (UID: "5468c668-1624-46e5-964e-d1cdb1f47ab8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.580807 4760 scope.go:117] "RemoveContainer" containerID="8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7" Nov 25 08:31:18 crc kubenswrapper[4760]: E1125 08:31:18.581853 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7\": container with ID starting with 8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7 not found: ID does not exist" containerID="8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.581927 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7"} err="failed to get container status \"8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7\": rpc error: code = NotFound desc = could not find container \"8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7\": container with ID starting with 8aa1a4703be24732e9fa774570f6241d85a2df18d540078e05983f27fe4d24d7 not found: ID does not exist" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.582211 4760 scope.go:117] "RemoveContainer" containerID="604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51" Nov 25 08:31:18 crc kubenswrapper[4760]: E1125 08:31:18.585082 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51\": container with ID starting with 
604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51 not found: ID does not exist" containerID="604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.585189 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51"} err="failed to get container status \"604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51\": rpc error: code = NotFound desc = could not find container \"604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51\": container with ID starting with 604582950f94eb42178b4a17a3e1e3d850bbf15862fa1214aefa3c0548103a51 not found: ID does not exist" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.585371 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5468c668-1624-46e5-964e-d1cdb1f47ab8-kube-api-access-n2nx2" (OuterVolumeSpecName: "kube-api-access-n2nx2") pod "5468c668-1624-46e5-964e-d1cdb1f47ab8" (UID: "5468c668-1624-46e5-964e-d1cdb1f47ab8"). InnerVolumeSpecName "kube-api-access-n2nx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.607302 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5468c668-1624-46e5-964e-d1cdb1f47ab8" (UID: "5468c668-1624-46e5-964e-d1cdb1f47ab8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.608429 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-config-data" (OuterVolumeSpecName: "config-data") pod "5468c668-1624-46e5-964e-d1cdb1f47ab8" (UID: "5468c668-1624-46e5-964e-d1cdb1f47ab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.634224 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5468c668-1624-46e5-964e-d1cdb1f47ab8" (UID: "5468c668-1624-46e5-964e-d1cdb1f47ab8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.679845 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.679881 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.679892 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5468c668-1624-46e5-964e-d1cdb1f47ab8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.679901 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2nx2\" (UniqueName: 
\"kubernetes.io/projected/5468c668-1624-46e5-964e-d1cdb1f47ab8-kube-api-access-n2nx2\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.679912 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5468c668-1624-46e5-964e-d1cdb1f47ab8-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.870896 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.886034 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.893313 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:31:18 crc kubenswrapper[4760]: E1125 08:31:18.893738 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc08500-3352-47d3-97f8-c269676edd00" containerName="nova-manage" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.893759 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc08500-3352-47d3-97f8-c269676edd00" containerName="nova-manage" Nov 25 08:31:18 crc kubenswrapper[4760]: E1125 08:31:18.893779 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-metadata" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.893787 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-metadata" Nov 25 08:31:18 crc kubenswrapper[4760]: E1125 08:31:18.893803 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-log" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.893811 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" 
containerName="nova-metadata-log" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.894016 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc08500-3352-47d3-97f8-c269676edd00" containerName="nova-manage" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.894036 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-log" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.894082 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" containerName="nova-metadata-metadata" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.895308 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.899289 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.904553 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.923915 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.951655 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5468c668-1624-46e5-964e-d1cdb1f47ab8" path="/var/lib/kubelet/pods/5468c668-1624-46e5-964e-d1cdb1f47ab8/volumes" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.986115 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7e8b89-ff82-471f-9255-d3268551c726-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 
08:31:18.986195 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7e8b89-ff82-471f-9255-d3268551c726-config-data\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.986227 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf7e8b89-ff82-471f-9255-d3268551c726-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.986364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf7e8b89-ff82-471f-9255-d3268551c726-logs\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:18 crc kubenswrapper[4760]: I1125 08:31:18.986488 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xjdt\" (UniqueName: \"kubernetes.io/projected/cf7e8b89-ff82-471f-9255-d3268551c726-kube-api-access-4xjdt\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.087518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xjdt\" (UniqueName: \"kubernetes.io/projected/cf7e8b89-ff82-471f-9255-d3268551c726-kube-api-access-4xjdt\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.087564 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7e8b89-ff82-471f-9255-d3268551c726-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.087594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7e8b89-ff82-471f-9255-d3268551c726-config-data\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.087613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf7e8b89-ff82-471f-9255-d3268551c726-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.087666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf7e8b89-ff82-471f-9255-d3268551c726-logs\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.089299 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf7e8b89-ff82-471f-9255-d3268551c726-logs\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.093966 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7e8b89-ff82-471f-9255-d3268551c726-config-data\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 
08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.094852 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7e8b89-ff82-471f-9255-d3268551c726-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.095770 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf7e8b89-ff82-471f-9255-d3268551c726-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.102484 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xjdt\" (UniqueName: \"kubernetes.io/projected/cf7e8b89-ff82-471f-9255-d3268551c726-kube-api-access-4xjdt\") pod \"nova-metadata-0\" (UID: \"cf7e8b89-ff82-471f-9255-d3268551c726\") " pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.226387 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.539931 4760 generic.go:334] "Generic (PLEG): container finished" podID="f972542f-a24d-4356-b5e1-2c3bbb87872f" containerID="715a1b717bec4d01db8d283599370bb6c07bc7483a52d97df76eb4748c2a4c34" exitCode=0 Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.540085 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f972542f-a24d-4356-b5e1-2c3bbb87872f","Type":"ContainerDied","Data":"715a1b717bec4d01db8d283599370bb6c07bc7483a52d97df76eb4748c2a4c34"} Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.640894 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.654745 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 08:31:19 crc kubenswrapper[4760]: W1125 08:31:19.657320 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf7e8b89_ff82_471f_9255_d3268551c726.slice/crio-24b9db76bab9d5d9aa0c975a662614ffff0f4942452022e47afedc61e93befa9 WatchSource:0}: Error finding container 24b9db76bab9d5d9aa0c975a662614ffff0f4942452022e47afedc61e93befa9: Status 404 returned error can't find the container with id 24b9db76bab9d5d9aa0c975a662614ffff0f4942452022e47afedc61e93befa9 Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.799691 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-config-data\") pod \"f972542f-a24d-4356-b5e1-2c3bbb87872f\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.799876 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-combined-ca-bundle\") pod \"f972542f-a24d-4356-b5e1-2c3bbb87872f\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.799927 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7ljk\" (UniqueName: \"kubernetes.io/projected/f972542f-a24d-4356-b5e1-2c3bbb87872f-kube-api-access-n7ljk\") pod \"f972542f-a24d-4356-b5e1-2c3bbb87872f\" (UID: \"f972542f-a24d-4356-b5e1-2c3bbb87872f\") " Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.805395 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f972542f-a24d-4356-b5e1-2c3bbb87872f-kube-api-access-n7ljk" (OuterVolumeSpecName: "kube-api-access-n7ljk") pod "f972542f-a24d-4356-b5e1-2c3bbb87872f" (UID: "f972542f-a24d-4356-b5e1-2c3bbb87872f"). InnerVolumeSpecName "kube-api-access-n7ljk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.829151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-config-data" (OuterVolumeSpecName: "config-data") pod "f972542f-a24d-4356-b5e1-2c3bbb87872f" (UID: "f972542f-a24d-4356-b5e1-2c3bbb87872f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.829522 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f972542f-a24d-4356-b5e1-2c3bbb87872f" (UID: "f972542f-a24d-4356-b5e1-2c3bbb87872f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.902118 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.902159 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f972542f-a24d-4356-b5e1-2c3bbb87872f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:19 crc kubenswrapper[4760]: I1125 08:31:19.902174 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7ljk\" (UniqueName: \"kubernetes.io/projected/f972542f-a24d-4356-b5e1-2c3bbb87872f-kube-api-access-n7ljk\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.563853 4760 generic.go:334] "Generic (PLEG): container finished" podID="9659389f-f918-427f-bad4-520ded46d858" containerID="7d1cb2edfb16ed656efcfcefd1d64130a310900faedb87172dac7a52d153f722" exitCode=0 Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.563893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9659389f-f918-427f-bad4-520ded46d858","Type":"ContainerDied","Data":"7d1cb2edfb16ed656efcfcefd1d64130a310900faedb87172dac7a52d153f722"} Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.566928 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f972542f-a24d-4356-b5e1-2c3bbb87872f","Type":"ContainerDied","Data":"4e7f417da4fb7077f67baa3170faa1e4c801d88fba2f5021e0a83fecbebe5c8a"} Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.566953 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.566983 4760 scope.go:117] "RemoveContainer" containerID="715a1b717bec4d01db8d283599370bb6c07bc7483a52d97df76eb4748c2a4c34" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.570154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf7e8b89-ff82-471f-9255-d3268551c726","Type":"ContainerStarted","Data":"1771b448ca4a51fb6e0ced68e54124e3f41bdae4cc8aedef289f02bc68457185"} Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.570219 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf7e8b89-ff82-471f-9255-d3268551c726","Type":"ContainerStarted","Data":"7e7d5c312c0ee2f7a829fe34884612f39127f9d6cc150b8dea1a86e39e971334"} Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.570234 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf7e8b89-ff82-471f-9255-d3268551c726","Type":"ContainerStarted","Data":"24b9db76bab9d5d9aa0c975a662614ffff0f4942452022e47afedc61e93befa9"} Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.595417 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.595382921 podStartE2EDuration="2.595382921s" podCreationTimestamp="2025-11-25 08:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:31:20.59223997 +0000 UTC m=+1214.301270765" watchObservedRunningTime="2025-11-25 08:31:20.595382921 +0000 UTC m=+1214.304413716" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.632362 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.639470 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-scheduler-0"] Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.648494 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:31:20 crc kubenswrapper[4760]: E1125 08:31:20.648932 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f972542f-a24d-4356-b5e1-2c3bbb87872f" containerName="nova-scheduler-scheduler" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.648955 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f972542f-a24d-4356-b5e1-2c3bbb87872f" containerName="nova-scheduler-scheduler" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.649406 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f972542f-a24d-4356-b5e1-2c3bbb87872f" containerName="nova-scheduler-scheduler" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.650184 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.651961 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.658185 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.736435 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.741582 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-public-tls-certs\") pod \"9659389f-f918-427f-bad4-520ded46d858\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.741652 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7nb5\" (UniqueName: \"kubernetes.io/projected/9659389f-f918-427f-bad4-520ded46d858-kube-api-access-k7nb5\") pod \"9659389f-f918-427f-bad4-520ded46d858\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.741689 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-internal-tls-certs\") pod \"9659389f-f918-427f-bad4-520ded46d858\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.741738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9659389f-f918-427f-bad4-520ded46d858-logs\") pod \"9659389f-f918-427f-bad4-520ded46d858\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.742655 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9659389f-f918-427f-bad4-520ded46d858-logs" (OuterVolumeSpecName: "logs") pod "9659389f-f918-427f-bad4-520ded46d858" (UID: "9659389f-f918-427f-bad4-520ded46d858"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.743032 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-combined-ca-bundle\") pod \"9659389f-f918-427f-bad4-520ded46d858\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.743084 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-config-data\") pod \"9659389f-f918-427f-bad4-520ded46d858\" (UID: \"9659389f-f918-427f-bad4-520ded46d858\") " Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.743431 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvr5r\" (UniqueName: \"kubernetes.io/projected/b4921858-b22b-474b-b8fb-6ccbd97bffac-kube-api-access-vvr5r\") pod \"nova-scheduler-0\" (UID: \"b4921858-b22b-474b-b8fb-6ccbd97bffac\") " pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.743465 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4921858-b22b-474b-b8fb-6ccbd97bffac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4921858-b22b-474b-b8fb-6ccbd97bffac\") " pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.743485 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4921858-b22b-474b-b8fb-6ccbd97bffac-config-data\") pod \"nova-scheduler-0\" (UID: \"b4921858-b22b-474b-b8fb-6ccbd97bffac\") " pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.743871 4760 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9659389f-f918-427f-bad4-520ded46d858-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.748475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9659389f-f918-427f-bad4-520ded46d858-kube-api-access-k7nb5" (OuterVolumeSpecName: "kube-api-access-k7nb5") pod "9659389f-f918-427f-bad4-520ded46d858" (UID: "9659389f-f918-427f-bad4-520ded46d858"). InnerVolumeSpecName "kube-api-access-k7nb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.807702 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9659389f-f918-427f-bad4-520ded46d858" (UID: "9659389f-f918-427f-bad4-520ded46d858"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.818323 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9659389f-f918-427f-bad4-520ded46d858" (UID: "9659389f-f918-427f-bad4-520ded46d858"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.845928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvr5r\" (UniqueName: \"kubernetes.io/projected/b4921858-b22b-474b-b8fb-6ccbd97bffac-kube-api-access-vvr5r\") pod \"nova-scheduler-0\" (UID: \"b4921858-b22b-474b-b8fb-6ccbd97bffac\") " pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.846228 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4921858-b22b-474b-b8fb-6ccbd97bffac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4921858-b22b-474b-b8fb-6ccbd97bffac\") " pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.846356 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4921858-b22b-474b-b8fb-6ccbd97bffac-config-data\") pod \"nova-scheduler-0\" (UID: \"b4921858-b22b-474b-b8fb-6ccbd97bffac\") " pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.846708 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.846795 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7nb5\" (UniqueName: \"kubernetes.io/projected/9659389f-f918-427f-bad4-520ded46d858-kube-api-access-k7nb5\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.846880 4760 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 
08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.861381 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4921858-b22b-474b-b8fb-6ccbd97bffac-config-data\") pod \"nova-scheduler-0\" (UID: \"b4921858-b22b-474b-b8fb-6ccbd97bffac\") " pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.863383 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-config-data" (OuterVolumeSpecName: "config-data") pod "9659389f-f918-427f-bad4-520ded46d858" (UID: "9659389f-f918-427f-bad4-520ded46d858"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.867541 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4921858-b22b-474b-b8fb-6ccbd97bffac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4921858-b22b-474b-b8fb-6ccbd97bffac\") " pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.898863 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvr5r\" (UniqueName: \"kubernetes.io/projected/b4921858-b22b-474b-b8fb-6ccbd97bffac-kube-api-access-vvr5r\") pod \"nova-scheduler-0\" (UID: \"b4921858-b22b-474b-b8fb-6ccbd97bffac\") " pod="openstack/nova-scheduler-0" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.948617 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.956924 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-public-tls-certs" (OuterVolumeSpecName: 
"public-tls-certs") pod "9659389f-f918-427f-bad4-520ded46d858" (UID: "9659389f-f918-427f-bad4-520ded46d858"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:31:20 crc kubenswrapper[4760]: I1125 08:31:20.960373 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f972542f-a24d-4356-b5e1-2c3bbb87872f" path="/var/lib/kubelet/pods/f972542f-a24d-4356-b5e1-2c3bbb87872f/volumes" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.049975 4760 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9659389f-f918-427f-bad4-520ded46d858-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.112085 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.584543 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9659389f-f918-427f-bad4-520ded46d858","Type":"ContainerDied","Data":"0d5748d4f6a2f24e5f4d4fa5e11eff5bfc254229a640a00c1fff4cba554c982c"} Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.585072 4760 scope.go:117] "RemoveContainer" containerID="7d1cb2edfb16ed656efcfcefd1d64130a310900faedb87172dac7a52d153f722" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.584571 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.605391 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.612553 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.617862 4760 scope.go:117] "RemoveContainer" containerID="245a178b2930d0ff51be2883c296c219579133dd773354559ff1a983577f56ff" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.641735 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.664977 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:21 crc kubenswrapper[4760]: E1125 08:31:21.665785 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-api" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.665808 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-api" Nov 25 08:31:21 crc kubenswrapper[4760]: E1125 08:31:21.665842 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-log" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.665852 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-log" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.666112 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-api" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.666128 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9659389f-f918-427f-bad4-520ded46d858" containerName="nova-api-log" 
Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.667829 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.672030 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.672275 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.672433 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.672433 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.863904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.864016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-public-tls-certs\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.864361 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftgh8\" (UniqueName: \"kubernetes.io/projected/32c2adbb-f391-45e9-b20b-db6f61f927eb-kube-api-access-ftgh8\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 
08:31:21.864492 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.864550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-config-data\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.864580 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c2adbb-f391-45e9-b20b-db6f61f927eb-logs\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.966984 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-config-data\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.967066 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c2adbb-f391-45e9-b20b-db6f61f927eb-logs\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.967122 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.967814 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-public-tls-certs\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.967995 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftgh8\" (UniqueName: \"kubernetes.io/projected/32c2adbb-f391-45e9-b20b-db6f61f927eb-kube-api-access-ftgh8\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.968080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.970240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32c2adbb-f391-45e9-b20b-db6f61f927eb-logs\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.974025 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.974383 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.975593 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-config-data\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.975837 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32c2adbb-f391-45e9-b20b-db6f61f927eb-public-tls-certs\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:21 crc kubenswrapper[4760]: I1125 08:31:21.986169 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftgh8\" (UniqueName: \"kubernetes.io/projected/32c2adbb-f391-45e9-b20b-db6f61f927eb-kube-api-access-ftgh8\") pod \"nova-api-0\" (UID: \"32c2adbb-f391-45e9-b20b-db6f61f927eb\") " pod="openstack/nova-api-0" Nov 25 08:31:22 crc kubenswrapper[4760]: I1125 08:31:22.049461 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 08:31:22 crc kubenswrapper[4760]: I1125 08:31:22.544214 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 08:31:22 crc kubenswrapper[4760]: W1125 08:31:22.559156 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32c2adbb_f391_45e9_b20b_db6f61f927eb.slice/crio-8299ca96b8628c86cfac36fc5433df4f908a06b5afdacd83b5e9f58b07e32831 WatchSource:0}: Error finding container 8299ca96b8628c86cfac36fc5433df4f908a06b5afdacd83b5e9f58b07e32831: Status 404 returned error can't find the container with id 8299ca96b8628c86cfac36fc5433df4f908a06b5afdacd83b5e9f58b07e32831 Nov 25 08:31:22 crc kubenswrapper[4760]: I1125 08:31:22.598599 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4921858-b22b-474b-b8fb-6ccbd97bffac","Type":"ContainerStarted","Data":"ba029a51e5732574ffc92980340fc0c6b2b3c18e24dd67ddb2aa0692b9ad9d4f"} Nov 25 08:31:22 crc kubenswrapper[4760]: I1125 08:31:22.598655 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4921858-b22b-474b-b8fb-6ccbd97bffac","Type":"ContainerStarted","Data":"2c78c39d3cb77713fefdc972e11a6e9feedfeb851c60e0583fa00d21a821b468"} Nov 25 08:31:22 crc kubenswrapper[4760]: I1125 08:31:22.600012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32c2adbb-f391-45e9-b20b-db6f61f927eb","Type":"ContainerStarted","Data":"8299ca96b8628c86cfac36fc5433df4f908a06b5afdacd83b5e9f58b07e32831"} Nov 25 08:31:22 crc kubenswrapper[4760]: I1125 08:31:22.617407 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.617380872 podStartE2EDuration="2.617380872s" podCreationTimestamp="2025-11-25 08:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:31:22.614743926 +0000 UTC m=+1216.323774741" watchObservedRunningTime="2025-11-25 08:31:22.617380872 +0000 UTC m=+1216.326411667" Nov 25 08:31:22 crc kubenswrapper[4760]: I1125 08:31:22.953421 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9659389f-f918-427f-bad4-520ded46d858" path="/var/lib/kubelet/pods/9659389f-f918-427f-bad4-520ded46d858/volumes" Nov 25 08:31:23 crc kubenswrapper[4760]: I1125 08:31:23.611863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32c2adbb-f391-45e9-b20b-db6f61f927eb","Type":"ContainerStarted","Data":"3af20547efc10c2b374c4482ca61839d3abe3806f678bdca3838bc32543b339d"} Nov 25 08:31:23 crc kubenswrapper[4760]: I1125 08:31:23.611933 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32c2adbb-f391-45e9-b20b-db6f61f927eb","Type":"ContainerStarted","Data":"893a53311410bd8e90705cd6b0999348fe7912bd0ea64f510ea5405db46c07f7"} Nov 25 08:31:23 crc kubenswrapper[4760]: I1125 08:31:23.638662 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.638640854 podStartE2EDuration="2.638640854s" podCreationTimestamp="2025-11-25 08:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:31:23.627980947 +0000 UTC m=+1217.337011792" watchObservedRunningTime="2025-11-25 08:31:23.638640854 +0000 UTC m=+1217.347671649" Nov 25 08:31:24 crc kubenswrapper[4760]: I1125 08:31:24.226742 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 08:31:24 crc kubenswrapper[4760]: I1125 08:31:24.226824 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 08:31:26 crc kubenswrapper[4760]: I1125 
08:31:26.112969 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 08:31:29 crc kubenswrapper[4760]: I1125 08:31:29.226971 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 08:31:29 crc kubenswrapper[4760]: I1125 08:31:29.227511 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 08:31:30 crc kubenswrapper[4760]: I1125 08:31:30.239479 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cf7e8b89-ff82-471f-9255-d3268551c726" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 08:31:30 crc kubenswrapper[4760]: I1125 08:31:30.239907 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cf7e8b89-ff82-471f-9255-d3268551c726" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 08:31:31 crc kubenswrapper[4760]: I1125 08:31:31.112519 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 08:31:31 crc kubenswrapper[4760]: I1125 08:31:31.140911 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 08:31:31 crc kubenswrapper[4760]: I1125 08:31:31.721853 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 08:31:31 crc kubenswrapper[4760]: I1125 08:31:31.747199 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:31:31 crc kubenswrapper[4760]: I1125 08:31:31.747273 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:31:32 crc kubenswrapper[4760]: I1125 08:31:32.051938 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 08:31:32 crc kubenswrapper[4760]: I1125 08:31:32.051985 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 08:31:33 crc kubenswrapper[4760]: I1125 08:31:33.063617 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32c2adbb-f391-45e9-b20b-db6f61f927eb" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 08:31:33 crc kubenswrapper[4760]: I1125 08:31:33.063905 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32c2adbb-f391-45e9-b20b-db6f61f927eb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 08:31:37 crc kubenswrapper[4760]: I1125 08:31:37.879587 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 08:31:39 crc kubenswrapper[4760]: I1125 08:31:39.235558 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 08:31:39 crc kubenswrapper[4760]: I1125 08:31:39.236076 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 08:31:39 crc kubenswrapper[4760]: I1125 08:31:39.243661 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 08:31:39 crc kubenswrapper[4760]: I1125 08:31:39.244920 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 08:31:42 crc kubenswrapper[4760]: I1125 08:31:42.061828 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 08:31:42 crc kubenswrapper[4760]: I1125 08:31:42.063129 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 08:31:42 crc kubenswrapper[4760]: I1125 08:31:42.063175 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 08:31:42 crc kubenswrapper[4760]: I1125 08:31:42.071787 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 08:31:42 crc kubenswrapper[4760]: I1125 08:31:42.787402 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 08:31:42 crc kubenswrapper[4760]: I1125 08:31:42.796656 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 08:31:51 crc kubenswrapper[4760]: I1125 08:31:51.384813 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 08:31:52 crc kubenswrapper[4760]: I1125 08:31:52.405778 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 08:31:55 crc kubenswrapper[4760]: I1125 08:31:55.429508 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" containerName="rabbitmq" 
containerID="cri-o://a5b5032f75202681ff15f1849a5603fa93e68299a1d6ea58a8f9e77727a67d66" gracePeriod=604796 Nov 25 08:31:56 crc kubenswrapper[4760]: I1125 08:31:56.477313 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a1de21d0-f4de-4294-a1b0-ec1328f46531" containerName="rabbitmq" containerID="cri-o://cf2fa34095cd9cb121b2ff90fc68810c7964cd3310d3a4b05a29a8049971b547" gracePeriod=604796 Nov 25 08:31:58 crc kubenswrapper[4760]: I1125 08:31:58.052057 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Nov 25 08:31:58 crc kubenswrapper[4760]: I1125 08:31:58.330911 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a1de21d0-f4de-4294-a1b0-ec1328f46531" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.746276 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.746583 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.746635 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.747437 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0ea7124286527d9806dc0c775161bbfad1ddc74c136f4d8ca77bb8bd02e22cc"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.747514 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://d0ea7124286527d9806dc0c775161bbfad1ddc74c136f4d8ca77bb8bd02e22cc" gracePeriod=600 Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.957641 4760 generic.go:334] "Generic (PLEG): container finished" podID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" containerID="a5b5032f75202681ff15f1849a5603fa93e68299a1d6ea58a8f9e77727a67d66" exitCode=0 Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.957730 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d","Type":"ContainerDied","Data":"a5b5032f75202681ff15f1849a5603fa93e68299a1d6ea58a8f9e77727a67d66"} Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.958116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d","Type":"ContainerDied","Data":"691b100e8e6f8fe5e823a797a22ad1951f8a75ca1102491ff81d1c6336ead85c"} Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.958132 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="691b100e8e6f8fe5e823a797a22ad1951f8a75ca1102491ff81d1c6336ead85c" Nov 25 08:32:01 crc 
kubenswrapper[4760]: I1125 08:32:01.960589 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="d0ea7124286527d9806dc0c775161bbfad1ddc74c136f4d8ca77bb8bd02e22cc" exitCode=0 Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.960622 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"d0ea7124286527d9806dc0c775161bbfad1ddc74c136f4d8ca77bb8bd02e22cc"} Nov 25 08:32:01 crc kubenswrapper[4760]: I1125 08:32:01.960644 4760 scope.go:117] "RemoveContainer" containerID="b9e0ecc3c247b6af19eb122bc74a94901ef917b6bb9d5aef56c5a3aafb61bcb8" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.010990 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.064214 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-pod-info\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.064390 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-kube-api-access-q996c\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.064425 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-server-conf\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 
08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.065542 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-tls\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.065907 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.066174 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-config-data\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.066219 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-erlang-cookie-secret\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.066262 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-erlang-cookie\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.066325 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-plugins-conf\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.066385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-plugins\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.066494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-confd\") pod \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\" (UID: \"0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d\") " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.071965 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.072439 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.073444 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.097918 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-pod-info" (OuterVolumeSpecName: "pod-info") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.102267 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.109485 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.109613 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-kube-api-access-q996c" (OuterVolumeSpecName: "kube-api-access-q996c") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "kube-api-access-q996c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.116475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.139132 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-config-data" (OuterVolumeSpecName: "config-data") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.168961 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-server-conf" (OuterVolumeSpecName: "server-conf") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169194 4760 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169216 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-kube-api-access-q996c\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169228 4760 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169236 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169271 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169280 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169289 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169301 4760 
reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169310 4760 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.169318 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.188397 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.258661 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" (UID: "0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.271152 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.271184 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.979314 4760 generic.go:334] "Generic (PLEG): container finished" podID="a1de21d0-f4de-4294-a1b0-ec1328f46531" containerID="cf2fa34095cd9cb121b2ff90fc68810c7964cd3310d3a4b05a29a8049971b547" exitCode=0 Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.979364 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1de21d0-f4de-4294-a1b0-ec1328f46531","Type":"ContainerDied","Data":"cf2fa34095cd9cb121b2ff90fc68810c7964cd3310d3a4b05a29a8049971b547"} Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.982282 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"ca52788d396deaeb74b41a0b267f55e1f30d7a61af988b5f3d847e16dbb9f1b0"} Nov 25 08:32:02 crc kubenswrapper[4760]: I1125 08:32:02.982371 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.086319 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.104153 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.130779 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.191327 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.191472 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1de21d0-f4de-4294-a1b0-ec1328f46531-erlang-cookie-secret\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.191541 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-server-conf\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.191638 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-erlang-cookie\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.191792 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-confd\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.191854 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1de21d0-f4de-4294-a1b0-ec1328f46531-pod-info\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.191895 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-plugins-conf\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.191981 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-tls\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.192036 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-config-data\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.192120 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-plugins\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.192144 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6dl6\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-kube-api-access-k6dl6\") pod \"a1de21d0-f4de-4294-a1b0-ec1328f46531\" (UID: \"a1de21d0-f4de-4294-a1b0-ec1328f46531\") " Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.201804 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.201945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.207656 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.213982 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 08:32:03 crc kubenswrapper[4760]: E1125 08:32:03.214467 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" containerName="rabbitmq" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.214489 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" containerName="rabbitmq" Nov 25 08:32:03 crc kubenswrapper[4760]: E1125 08:32:03.214514 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1de21d0-f4de-4294-a1b0-ec1328f46531" containerName="rabbitmq" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.214521 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1de21d0-f4de-4294-a1b0-ec1328f46531" containerName="rabbitmq" Nov 25 08:32:03 crc kubenswrapper[4760]: E1125 08:32:03.214533 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" containerName="setup-container" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.214538 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" containerName="setup-container" Nov 25 08:32:03 crc kubenswrapper[4760]: E1125 08:32:03.214557 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1de21d0-f4de-4294-a1b0-ec1328f46531" containerName="setup-container" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.214563 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1de21d0-f4de-4294-a1b0-ec1328f46531" containerName="setup-container" Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.216710 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" containerName="rabbitmq" Nov 25 08:32:03 crc 
kubenswrapper[4760]: I1125 08:32:03.216743 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1de21d0-f4de-4294-a1b0-ec1328f46531" containerName="rabbitmq"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.217819 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.227914 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.228325 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.228756 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.228907 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.229014 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.229147 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.229263 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.229377 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mgpb7"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.233210 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.245753 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.250823 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-kube-api-access-k6dl6" (OuterVolumeSpecName: "kube-api-access-k6dl6") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "kube-api-access-k6dl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.250999 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a1de21d0-f4de-4294-a1b0-ec1328f46531-pod-info" (OuterVolumeSpecName: "pod-info") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.251178 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1de21d0-f4de-4294-a1b0-ec1328f46531-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.254327 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-config-data" (OuterVolumeSpecName: "config-data") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.285016 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-server-conf" (OuterVolumeSpecName: "server-conf") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295428 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295504 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295566 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac940436-7641-4872-8ab1-f6e0aca87e80-config-data\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ac940436-7641-4872-8ab1-f6e0aca87e80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ac940436-7641-4872-8ab1-f6e0aca87e80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295743 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ac940436-7641-4872-8ab1-f6e0aca87e80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qq99\" (UniqueName: \"kubernetes.io/projected/ac940436-7641-4872-8ab1-f6e0aca87e80-kube-api-access-7qq99\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295814 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295847 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295880 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.295998 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ac940436-7641-4872-8ab1-f6e0aca87e80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296236 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296280 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296290 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296300 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6dl6\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-kube-api-access-k6dl6\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296328 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296338 4760 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a1de21d0-f4de-4294-a1b0-ec1328f46531-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296347 4760 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-server-conf\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296356 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296365 4760 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a1de21d0-f4de-4294-a1b0-ec1328f46531-pod-info\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.296375 4760 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a1de21d0-f4de-4294-a1b0-ec1328f46531-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.319963 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.380522 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a1de21d0-f4de-4294-a1b0-ec1328f46531" (UID: "a1de21d0-f4de-4294-a1b0-ec1328f46531"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.397893 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.397955 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398009 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac940436-7641-4872-8ab1-f6e0aca87e80-config-data\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ac940436-7641-4872-8ab1-f6e0aca87e80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ac940436-7641-4872-8ab1-f6e0aca87e80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398111 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ac940436-7641-4872-8ab1-f6e0aca87e80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qq99\" (UniqueName: \"kubernetes.io/projected/ac940436-7641-4872-8ab1-f6e0aca87e80-kube-api-access-7qq99\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398154 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398170 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398223 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ac940436-7641-4872-8ab1-f6e0aca87e80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398331 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.398346 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a1de21d0-f4de-4294-a1b0-ec1328f46531-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.399115 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ac940436-7641-4872-8ab1-f6e0aca87e80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.399155 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.399478 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.399660 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.401711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ac940436-7641-4872-8ab1-f6e0aca87e80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.402943 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ac940436-7641-4872-8ab1-f6e0aca87e80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.403341 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac940436-7641-4872-8ab1-f6e0aca87e80-config-data\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.405099 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.405572 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ac940436-7641-4872-8ab1-f6e0aca87e80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.406493 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ac940436-7641-4872-8ab1-f6e0aca87e80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.422066 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qq99\" (UniqueName: \"kubernetes.io/projected/ac940436-7641-4872-8ab1-f6e0aca87e80-kube-api-access-7qq99\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.431519 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"ac940436-7641-4872-8ab1-f6e0aca87e80\") " pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.578817 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.991369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a1de21d0-f4de-4294-a1b0-ec1328f46531","Type":"ContainerDied","Data":"72b0f80d3920b470a033103c26b36a33d50cf57658bee19acb2b6e1deb131c00"}
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.991391 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:03 crc kubenswrapper[4760]: I1125 08:32:03.991712 4760 scope.go:117] "RemoveContainer" containerID="cf2fa34095cd9cb121b2ff90fc68810c7964cd3310d3a4b05a29a8049971b547"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.032915 4760 scope.go:117] "RemoveContainer" containerID="e0f65cbf20b69fcac39954194d3b9cfcddfcddfc66fab1a7b56132d9e8e38deb"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.039721 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.064917 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.083557 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.085556 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.093035 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.093369 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.093506 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.093671 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.093696 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.093944 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mhr6s"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.094009 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.094239 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.097770 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.114947 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115095 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vx9p\" (UniqueName: \"kubernetes.io/projected/54c05cca-ddf1-4567-b30b-f770bd6b6704-kube-api-access-5vx9p\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115124 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54c05cca-ddf1-4567-b30b-f770bd6b6704-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115153 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54c05cca-ddf1-4567-b30b-f770bd6b6704-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54c05cca-ddf1-4567-b30b-f770bd6b6704-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115333 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115378 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115409 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54c05cca-ddf1-4567-b30b-f770bd6b6704-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115441 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54c05cca-ddf1-4567-b30b-f770bd6b6704-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.115515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216545 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54c05cca-ddf1-4567-b30b-f770bd6b6704-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54c05cca-ddf1-4567-b30b-f770bd6b6704-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216671 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216754 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vx9p\" (UniqueName: \"kubernetes.io/projected/54c05cca-ddf1-4567-b30b-f770bd6b6704-kube-api-access-5vx9p\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216771 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54c05cca-ddf1-4567-b30b-f770bd6b6704-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216791 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54c05cca-ddf1-4567-b30b-f770bd6b6704-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216814 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54c05cca-ddf1-4567-b30b-f770bd6b6704-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.216848 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.217926 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.218089 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.218680 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54c05cca-ddf1-4567-b30b-f770bd6b6704-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.219274 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54c05cca-ddf1-4567-b30b-f770bd6b6704-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.219538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.220769 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54c05cca-ddf1-4567-b30b-f770bd6b6704-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.222544 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54c05cca-ddf1-4567-b30b-f770bd6b6704-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.223325 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.223522 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54c05cca-ddf1-4567-b30b-f770bd6b6704-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.223552 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54c05cca-ddf1-4567-b30b-f770bd6b6704-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.236681 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vx9p\" (UniqueName: \"kubernetes.io/projected/54c05cca-ddf1-4567-b30b-f770bd6b6704-kube-api-access-5vx9p\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.241668 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"54c05cca-ddf1-4567-b30b-f770bd6b6704\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.295948 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.798726 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 08:32:04 crc kubenswrapper[4760]: W1125 08:32:04.805484 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54c05cca_ddf1_4567_b30b_f770bd6b6704.slice/crio-0bd8719fd876198cfe542bd9397771be0557ac84d3c13519a41be88bdfe86b9a WatchSource:0}: Error finding container 0bd8719fd876198cfe542bd9397771be0557ac84d3c13519a41be88bdfe86b9a: Status 404 returned error can't find the container with id 0bd8719fd876198cfe542bd9397771be0557ac84d3c13519a41be88bdfe86b9a
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.957302 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d" path="/var/lib/kubelet/pods/0f4df0a4-e5ad-47f2-a8e9-44a800f24a2d/volumes"
Nov 25 08:32:04 crc kubenswrapper[4760]: I1125 08:32:04.958750 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1de21d0-f4de-4294-a1b0-ec1328f46531" path="/var/lib/kubelet/pods/a1de21d0-f4de-4294-a1b0-ec1328f46531/volumes"
Nov 25 08:32:05 crc kubenswrapper[4760]: I1125 08:32:05.001588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ac940436-7641-4872-8ab1-f6e0aca87e80","Type":"ContainerStarted","Data":"e0769e85115e92e87055fd156487d53983114a2da802502f545d7b6a58c37c42"}
Nov 25 08:32:05 crc kubenswrapper[4760]: I1125 08:32:05.002519 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"54c05cca-ddf1-4567-b30b-f770bd6b6704","Type":"ContainerStarted","Data":"0bd8719fd876198cfe542bd9397771be0557ac84d3c13519a41be88bdfe86b9a"}
Nov 25 08:32:06 crc kubenswrapper[4760]: I1125 08:32:06.016067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ac940436-7641-4872-8ab1-f6e0aca87e80","Type":"ContainerStarted","Data":"8276e6a5a5a7bc9aea689a074c0270b2fbe6c9b0058203b0349f513f215a2db8"}
Nov 25 08:32:06 crc kubenswrapper[4760]: I1125 08:32:06.905171 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568675b579-bhtw4"]
Nov 25 08:32:06 crc kubenswrapper[4760]: I1125 08:32:06.911616 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:06 crc kubenswrapper[4760]: I1125 08:32:06.913605 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 08:32:06 crc kubenswrapper[4760]: I1125 08:32:06.933818 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568675b579-bhtw4"] Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.016550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-config\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.016623 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-sb\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.016717 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-dns-svc\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.016771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-nb\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " 
pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.016791 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xf7g\" (UniqueName: \"kubernetes.io/projected/3c3ed463-1fc2-49ca-a374-754949519d5e-kube-api-access-8xf7g\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.016809 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-openstack-edpm-ipam\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.028110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"54c05cca-ddf1-4567-b30b-f770bd6b6704","Type":"ContainerStarted","Data":"7a8d8750596c1b614ce8320e552c53a16ca63c3e8c4722f015ddbb239ab4d8a1"} Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.119270 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-config\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.119378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-sb\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: 
I1125 08:32:07.119564 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-dns-svc\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.120405 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-nb\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.120534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xf7g\" (UniqueName: \"kubernetes.io/projected/3c3ed463-1fc2-49ca-a374-754949519d5e-kube-api-access-8xf7g\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.120701 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-openstack-edpm-ipam\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.120915 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-sb\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.121026 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-dns-svc\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.121179 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-nb\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.121451 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-openstack-edpm-ipam\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.121608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-config\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.141043 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xf7g\" (UniqueName: \"kubernetes.io/projected/3c3ed463-1fc2-49ca-a374-754949519d5e-kube-api-access-8xf7g\") pod \"dnsmasq-dns-568675b579-bhtw4\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.243065 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:07 crc kubenswrapper[4760]: I1125 08:32:07.749163 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568675b579-bhtw4"] Nov 25 08:32:08 crc kubenswrapper[4760]: I1125 08:32:08.046777 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568675b579-bhtw4" event={"ID":"3c3ed463-1fc2-49ca-a374-754949519d5e","Type":"ContainerStarted","Data":"589668da8bc603f460e5cd78fa79cf8eb74f77d60ff6d5840b0d1c2c4b45e3a4"} Nov 25 08:32:08 crc kubenswrapper[4760]: I1125 08:32:08.046856 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568675b579-bhtw4" event={"ID":"3c3ed463-1fc2-49ca-a374-754949519d5e","Type":"ContainerStarted","Data":"698f40931e898b30c432a5dd48eb42bb2458fa7dc2d6219af1aa022a639c281a"} Nov 25 08:32:09 crc kubenswrapper[4760]: I1125 08:32:09.059078 4760 generic.go:334] "Generic (PLEG): container finished" podID="3c3ed463-1fc2-49ca-a374-754949519d5e" containerID="589668da8bc603f460e5cd78fa79cf8eb74f77d60ff6d5840b0d1c2c4b45e3a4" exitCode=0 Nov 25 08:32:09 crc kubenswrapper[4760]: I1125 08:32:09.059228 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568675b579-bhtw4" event={"ID":"3c3ed463-1fc2-49ca-a374-754949519d5e","Type":"ContainerDied","Data":"589668da8bc603f460e5cd78fa79cf8eb74f77d60ff6d5840b0d1c2c4b45e3a4"} Nov 25 08:32:10 crc kubenswrapper[4760]: I1125 08:32:10.071077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568675b579-bhtw4" event={"ID":"3c3ed463-1fc2-49ca-a374-754949519d5e","Type":"ContainerStarted","Data":"5ed03f752d7b8a1c02a2d4cbd80f7d9f9f088dbf629c5d30065efa4d89169160"} Nov 25 08:32:10 crc kubenswrapper[4760]: I1125 08:32:10.071392 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:10 crc kubenswrapper[4760]: I1125 08:32:10.095537 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568675b579-bhtw4" podStartSLOduration=4.09551698 podStartE2EDuration="4.09551698s" podCreationTimestamp="2025-11-25 08:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:32:10.089656131 +0000 UTC m=+1263.798686936" watchObservedRunningTime="2025-11-25 08:32:10.09551698 +0000 UTC m=+1263.804547785" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.244839 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.327520 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9b558957-mx6l9"] Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.327747 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" podUID="dcc85bf3-1602-4530-ab07-c3f12b365f5e" containerName="dnsmasq-dns" containerID="cri-o://cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c" gracePeriod=10 Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.481735 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dc44c56c-4dzcm"] Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.483547 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.503431 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc44c56c-4dzcm"] Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.626110 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.626355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2b97\" (UniqueName: \"kubernetes.io/projected/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-kube-api-access-n2b97\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.626683 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-dns-svc\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.626857 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.627022 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-config\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.627158 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.728530 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.728592 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2b97\" (UniqueName: \"kubernetes.io/projected/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-kube-api-access-n2b97\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.728623 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-dns-svc\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.728666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.728709 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-config\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.728745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.729690 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.730391 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-dns-svc\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.730430 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-config\") pod 
\"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.730608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.731260 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.767156 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2b97\" (UniqueName: \"kubernetes.io/projected/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-kube-api-access-n2b97\") pod \"dnsmasq-dns-6dc44c56c-4dzcm\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.825297 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:17 crc kubenswrapper[4760]: I1125 08:32:17.909070 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.043624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-config\") pod \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.043975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm7kf\" (UniqueName: \"kubernetes.io/projected/dcc85bf3-1602-4530-ab07-c3f12b365f5e-kube-api-access-cm7kf\") pod \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.044021 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-dns-svc\") pod \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.044188 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-nb\") pod \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.044290 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-sb\") pod \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\" (UID: \"dcc85bf3-1602-4530-ab07-c3f12b365f5e\") " Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.052712 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dcc85bf3-1602-4530-ab07-c3f12b365f5e-kube-api-access-cm7kf" (OuterVolumeSpecName: "kube-api-access-cm7kf") pod "dcc85bf3-1602-4530-ab07-c3f12b365f5e" (UID: "dcc85bf3-1602-4530-ab07-c3f12b365f5e"). InnerVolumeSpecName "kube-api-access-cm7kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.138240 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-config" (OuterVolumeSpecName: "config") pod "dcc85bf3-1602-4530-ab07-c3f12b365f5e" (UID: "dcc85bf3-1602-4530-ab07-c3f12b365f5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.146627 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.146659 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm7kf\" (UniqueName: \"kubernetes.io/projected/dcc85bf3-1602-4530-ab07-c3f12b365f5e-kube-api-access-cm7kf\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.146886 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dcc85bf3-1602-4530-ab07-c3f12b365f5e" (UID: "dcc85bf3-1602-4530-ab07-c3f12b365f5e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.147676 4760 generic.go:334] "Generic (PLEG): container finished" podID="dcc85bf3-1602-4530-ab07-c3f12b365f5e" containerID="cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c" exitCode=0 Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.147786 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" event={"ID":"dcc85bf3-1602-4530-ab07-c3f12b365f5e","Type":"ContainerDied","Data":"cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c"} Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.147879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" event={"ID":"dcc85bf3-1602-4530-ab07-c3f12b365f5e","Type":"ContainerDied","Data":"1e2e89a06f8aa0cba9a96a04d3a8d6661ff2279fa59014853775d30b194cbd09"} Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.147974 4760 scope.go:117] "RemoveContainer" containerID="cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.148192 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9b558957-mx6l9" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.164035 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dcc85bf3-1602-4530-ab07-c3f12b365f5e" (UID: "dcc85bf3-1602-4530-ab07-c3f12b365f5e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.164801 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dcc85bf3-1602-4530-ab07-c3f12b365f5e" (UID: "dcc85bf3-1602-4530-ab07-c3f12b365f5e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.174389 4760 scope.go:117] "RemoveContainer" containerID="fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.246396 4760 scope.go:117] "RemoveContainer" containerID="cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c" Nov 25 08:32:18 crc kubenswrapper[4760]: E1125 08:32:18.246942 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c\": container with ID starting with cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c not found: ID does not exist" containerID="cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.246985 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c"} err="failed to get container status \"cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c\": rpc error: code = NotFound desc = could not find container \"cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c\": container with ID starting with cd9e43345e6a95699f775ea7aa2311d880c9131e9589581d5ba74710132e998c not found: ID does not exist" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.247042 4760 scope.go:117] "RemoveContainer" 
containerID="fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8" Nov 25 08:32:18 crc kubenswrapper[4760]: E1125 08:32:18.247518 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8\": container with ID starting with fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8 not found: ID does not exist" containerID="fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.247559 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8"} err="failed to get container status \"fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8\": rpc error: code = NotFound desc = could not find container \"fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8\": container with ID starting with fad2960c493f29cb172cc992c7527ec1fa10ff93fe7646855719d5d703cf3ce8 not found: ID does not exist" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.247962 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.247991 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.248000 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcc85bf3-1602-4530-ab07-c3f12b365f5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:18 crc kubenswrapper[4760]: W1125 08:32:18.396158 4760 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27e51547_0b08_4cb3_8a61_1ecfc452fbdb.slice/crio-c02877b0d39e2472098f1ff5b4c12fe3a6b619e599f045a616ec8d6c56644aab WatchSource:0}: Error finding container c02877b0d39e2472098f1ff5b4c12fe3a6b619e599f045a616ec8d6c56644aab: Status 404 returned error can't find the container with id c02877b0d39e2472098f1ff5b4c12fe3a6b619e599f045a616ec8d6c56644aab Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.396396 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc44c56c-4dzcm"] Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.578765 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9b558957-mx6l9"] Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.587906 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c9b558957-mx6l9"] Nov 25 08:32:18 crc kubenswrapper[4760]: I1125 08:32:18.954439 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc85bf3-1602-4530-ab07-c3f12b365f5e" path="/var/lib/kubelet/pods/dcc85bf3-1602-4530-ab07-c3f12b365f5e/volumes" Nov 25 08:32:19 crc kubenswrapper[4760]: I1125 08:32:19.159438 4760 generic.go:334] "Generic (PLEG): container finished" podID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" containerID="cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56" exitCode=0 Nov 25 08:32:19 crc kubenswrapper[4760]: I1125 08:32:19.159499 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" event={"ID":"27e51547-0b08-4cb3-8a61-1ecfc452fbdb","Type":"ContainerDied","Data":"cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56"} Nov 25 08:32:19 crc kubenswrapper[4760]: I1125 08:32:19.159536 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" 
event={"ID":"27e51547-0b08-4cb3-8a61-1ecfc452fbdb","Type":"ContainerStarted","Data":"c02877b0d39e2472098f1ff5b4c12fe3a6b619e599f045a616ec8d6c56644aab"} Nov 25 08:32:20 crc kubenswrapper[4760]: I1125 08:32:20.170861 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" event={"ID":"27e51547-0b08-4cb3-8a61-1ecfc452fbdb","Type":"ContainerStarted","Data":"9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd"} Nov 25 08:32:20 crc kubenswrapper[4760]: I1125 08:32:20.171347 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:20 crc kubenswrapper[4760]: I1125 08:32:20.206279 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" podStartSLOduration=3.206238967 podStartE2EDuration="3.206238967s" podCreationTimestamp="2025-11-25 08:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:32:20.195485307 +0000 UTC m=+1273.904516142" watchObservedRunningTime="2025-11-25 08:32:20.206238967 +0000 UTC m=+1273.915269762" Nov 25 08:32:27 crc kubenswrapper[4760]: I1125 08:32:27.827489 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 08:32:27 crc kubenswrapper[4760]: I1125 08:32:27.895317 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568675b579-bhtw4"] Nov 25 08:32:27 crc kubenswrapper[4760]: I1125 08:32:27.895591 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568675b579-bhtw4" podUID="3c3ed463-1fc2-49ca-a374-754949519d5e" containerName="dnsmasq-dns" containerID="cri-o://5ed03f752d7b8a1c02a2d4cbd80f7d9f9f088dbf629c5d30065efa4d89169160" gracePeriod=10 Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.258358 4760 generic.go:334] 
"Generic (PLEG): container finished" podID="3c3ed463-1fc2-49ca-a374-754949519d5e" containerID="5ed03f752d7b8a1c02a2d4cbd80f7d9f9f088dbf629c5d30065efa4d89169160" exitCode=0 Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.258696 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568675b579-bhtw4" event={"ID":"3c3ed463-1fc2-49ca-a374-754949519d5e","Type":"ContainerDied","Data":"5ed03f752d7b8a1c02a2d4cbd80f7d9f9f088dbf629c5d30065efa4d89169160"} Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.380492 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.551555 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-openstack-edpm-ipam\") pod \"3c3ed463-1fc2-49ca-a374-754949519d5e\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.551994 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-sb\") pod \"3c3ed463-1fc2-49ca-a374-754949519d5e\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.552020 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-config\") pod \"3c3ed463-1fc2-49ca-a374-754949519d5e\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.552068 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-nb\") pod 
\"3c3ed463-1fc2-49ca-a374-754949519d5e\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.552120 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xf7g\" (UniqueName: \"kubernetes.io/projected/3c3ed463-1fc2-49ca-a374-754949519d5e-kube-api-access-8xf7g\") pod \"3c3ed463-1fc2-49ca-a374-754949519d5e\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.552143 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-dns-svc\") pod \"3c3ed463-1fc2-49ca-a374-754949519d5e\" (UID: \"3c3ed463-1fc2-49ca-a374-754949519d5e\") " Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.561656 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3ed463-1fc2-49ca-a374-754949519d5e-kube-api-access-8xf7g" (OuterVolumeSpecName: "kube-api-access-8xf7g") pod "3c3ed463-1fc2-49ca-a374-754949519d5e" (UID: "3c3ed463-1fc2-49ca-a374-754949519d5e"). InnerVolumeSpecName "kube-api-access-8xf7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.600838 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-config" (OuterVolumeSpecName: "config") pod "3c3ed463-1fc2-49ca-a374-754949519d5e" (UID: "3c3ed463-1fc2-49ca-a374-754949519d5e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.601852 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c3ed463-1fc2-49ca-a374-754949519d5e" (UID: "3c3ed463-1fc2-49ca-a374-754949519d5e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.609818 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c3ed463-1fc2-49ca-a374-754949519d5e" (UID: "3c3ed463-1fc2-49ca-a374-754949519d5e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.612027 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c3ed463-1fc2-49ca-a374-754949519d5e" (UID: "3c3ed463-1fc2-49ca-a374-754949519d5e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.620929 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3c3ed463-1fc2-49ca-a374-754949519d5e" (UID: "3c3ed463-1fc2-49ca-a374-754949519d5e"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.654934 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.654982 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.654993 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.655002 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.655011 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xf7g\" (UniqueName: \"kubernetes.io/projected/3c3ed463-1fc2-49ca-a374-754949519d5e-kube-api-access-8xf7g\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:28 crc kubenswrapper[4760]: I1125 08:32:28.655021 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c3ed463-1fc2-49ca-a374-754949519d5e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:29 crc kubenswrapper[4760]: I1125 08:32:29.271627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568675b579-bhtw4" event={"ID":"3c3ed463-1fc2-49ca-a374-754949519d5e","Type":"ContainerDied","Data":"698f40931e898b30c432a5dd48eb42bb2458fa7dc2d6219af1aa022a639c281a"} Nov 25 08:32:29 crc 
kubenswrapper[4760]: I1125 08:32:29.271738 4760 scope.go:117] "RemoveContainer" containerID="5ed03f752d7b8a1c02a2d4cbd80f7d9f9f088dbf629c5d30065efa4d89169160" Nov 25 08:32:29 crc kubenswrapper[4760]: I1125 08:32:29.274570 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568675b579-bhtw4" Nov 25 08:32:29 crc kubenswrapper[4760]: I1125 08:32:29.303590 4760 scope.go:117] "RemoveContainer" containerID="589668da8bc603f460e5cd78fa79cf8eb74f77d60ff6d5840b0d1c2c4b45e3a4" Nov 25 08:32:29 crc kubenswrapper[4760]: I1125 08:32:29.313320 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568675b579-bhtw4"] Nov 25 08:32:29 crc kubenswrapper[4760]: I1125 08:32:29.323661 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568675b579-bhtw4"] Nov 25 08:32:30 crc kubenswrapper[4760]: I1125 08:32:30.951145 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3ed463-1fc2-49ca-a374-754949519d5e" path="/var/lib/kubelet/pods/3c3ed463-1fc2-49ca-a374-754949519d5e/volumes" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.056440 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg"] Nov 25 08:32:38 crc kubenswrapper[4760]: E1125 08:32:38.057690 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc85bf3-1602-4530-ab07-c3f12b365f5e" containerName="dnsmasq-dns" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.057718 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc85bf3-1602-4530-ab07-c3f12b365f5e" containerName="dnsmasq-dns" Nov 25 08:32:38 crc kubenswrapper[4760]: E1125 08:32:38.057763 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3ed463-1fc2-49ca-a374-754949519d5e" containerName="init" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.057775 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3c3ed463-1fc2-49ca-a374-754949519d5e" containerName="init" Nov 25 08:32:38 crc kubenswrapper[4760]: E1125 08:32:38.057805 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc85bf3-1602-4530-ab07-c3f12b365f5e" containerName="init" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.057816 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc85bf3-1602-4530-ab07-c3f12b365f5e" containerName="init" Nov 25 08:32:38 crc kubenswrapper[4760]: E1125 08:32:38.057838 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3ed463-1fc2-49ca-a374-754949519d5e" containerName="dnsmasq-dns" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.057849 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3ed463-1fc2-49ca-a374-754949519d5e" containerName="dnsmasq-dns" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.058195 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc85bf3-1602-4530-ab07-c3f12b365f5e" containerName="dnsmasq-dns" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.058220 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3ed463-1fc2-49ca-a374-754949519d5e" containerName="dnsmasq-dns" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.059141 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.061516 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.061913 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.062165 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.068314 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg"] Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.070747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.224990 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.225081 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjzfm\" (UniqueName: \"kubernetes.io/projected/5cc5f52f-76f6-430c-a302-b6b36fc84462-kube-api-access-bjzfm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.225121 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.225158 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.326989 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.327169 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.327286 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjzfm\" (UniqueName: 
\"kubernetes.io/projected/5cc5f52f-76f6-430c-a302-b6b36fc84462-kube-api-access-bjzfm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.327320 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.334125 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.336796 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.336975 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc 
kubenswrapper[4760]: I1125 08:32:38.353345 4760 generic.go:334] "Generic (PLEG): container finished" podID="ac940436-7641-4872-8ab1-f6e0aca87e80" containerID="8276e6a5a5a7bc9aea689a074c0270b2fbe6c9b0058203b0349f513f215a2db8" exitCode=0 Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.353449 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ac940436-7641-4872-8ab1-f6e0aca87e80","Type":"ContainerDied","Data":"8276e6a5a5a7bc9aea689a074c0270b2fbe6c9b0058203b0349f513f215a2db8"} Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.355935 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjzfm\" (UniqueName: \"kubernetes.io/projected/5cc5f52f-76f6-430c-a302-b6b36fc84462-kube-api-access-bjzfm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.382641 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.911753 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg"] Nov 25 08:32:38 crc kubenswrapper[4760]: I1125 08:32:38.916564 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:32:39 crc kubenswrapper[4760]: I1125 08:32:39.369278 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" event={"ID":"5cc5f52f-76f6-430c-a302-b6b36fc84462","Type":"ContainerStarted","Data":"84acecbee4d196838c3e0a12bd77bfbde13f5636607ce26bc79107286747b1ca"} Nov 25 08:32:39 crc kubenswrapper[4760]: I1125 08:32:39.372392 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ac940436-7641-4872-8ab1-f6e0aca87e80","Type":"ContainerStarted","Data":"74301c84d294bb64f0276a063d74ddcb9d5d3723b7fcf15248d2de89901bfe24"} Nov 25 08:32:39 crc kubenswrapper[4760]: I1125 08:32:39.372805 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 08:32:39 crc kubenswrapper[4760]: I1125 08:32:39.374368 4760 generic.go:334] "Generic (PLEG): container finished" podID="54c05cca-ddf1-4567-b30b-f770bd6b6704" containerID="7a8d8750596c1b614ce8320e552c53a16ca63c3e8c4722f015ddbb239ab4d8a1" exitCode=0 Nov 25 08:32:39 crc kubenswrapper[4760]: I1125 08:32:39.374392 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"54c05cca-ddf1-4567-b30b-f770bd6b6704","Type":"ContainerDied","Data":"7a8d8750596c1b614ce8320e552c53a16ca63c3e8c4722f015ddbb239ab4d8a1"} Nov 25 08:32:39 crc kubenswrapper[4760]: I1125 08:32:39.408406 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=36.408392523 podStartE2EDuration="36.408392523s" podCreationTimestamp="2025-11-25 08:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:32:39.406434586 +0000 UTC m=+1293.115465381" watchObservedRunningTime="2025-11-25 08:32:39.408392523 +0000 UTC m=+1293.117423318" Nov 25 08:32:40 crc kubenswrapper[4760]: I1125 08:32:40.387574 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"54c05cca-ddf1-4567-b30b-f770bd6b6704","Type":"ContainerStarted","Data":"0b83cc23f2455d7e028c9e515c87e21d4ca633febe0351f49b2ff058d819254b"} Nov 25 08:32:40 crc kubenswrapper[4760]: I1125 08:32:40.388135 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:32:40 crc kubenswrapper[4760]: I1125 08:32:40.414702 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.414685543 podStartE2EDuration="36.414685543s" podCreationTimestamp="2025-11-25 08:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:32:40.410582105 +0000 UTC m=+1294.119612930" watchObservedRunningTime="2025-11-25 08:32:40.414685543 +0000 UTC m=+1294.123716338" Nov 25 08:32:49 crc kubenswrapper[4760]: I1125 08:32:49.466791 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" event={"ID":"5cc5f52f-76f6-430c-a302-b6b36fc84462","Type":"ContainerStarted","Data":"cb5bfe930bb1aa2423cd184286df88746db2dd94ce1c2459557cc5f905dadda9"} Nov 25 08:32:49 crc kubenswrapper[4760]: I1125 08:32:49.487398 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" 
podStartSLOduration=1.977931791 podStartE2EDuration="11.487381956s" podCreationTimestamp="2025-11-25 08:32:38 +0000 UTC" firstStartedPulling="2025-11-25 08:32:38.916295006 +0000 UTC m=+1292.625325801" lastFinishedPulling="2025-11-25 08:32:48.425745171 +0000 UTC m=+1302.134775966" observedRunningTime="2025-11-25 08:32:49.483984468 +0000 UTC m=+1303.193015263" watchObservedRunningTime="2025-11-25 08:32:49.487381956 +0000 UTC m=+1303.196412751" Nov 25 08:32:53 crc kubenswrapper[4760]: I1125 08:32:53.583446 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 08:32:54 crc kubenswrapper[4760]: I1125 08:32:54.299453 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 08:33:00 crc kubenswrapper[4760]: I1125 08:33:00.575416 4760 generic.go:334] "Generic (PLEG): container finished" podID="5cc5f52f-76f6-430c-a302-b6b36fc84462" containerID="cb5bfe930bb1aa2423cd184286df88746db2dd94ce1c2459557cc5f905dadda9" exitCode=0 Nov 25 08:33:00 crc kubenswrapper[4760]: I1125 08:33:00.575988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" event={"ID":"5cc5f52f-76f6-430c-a302-b6b36fc84462","Type":"ContainerDied","Data":"cb5bfe930bb1aa2423cd184286df88746db2dd94ce1c2459557cc5f905dadda9"} Nov 25 08:33:01 crc kubenswrapper[4760]: I1125 08:33:01.972240 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.016442 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-repo-setup-combined-ca-bundle\") pod \"5cc5f52f-76f6-430c-a302-b6b36fc84462\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.016545 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-inventory\") pod \"5cc5f52f-76f6-430c-a302-b6b36fc84462\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.016715 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjzfm\" (UniqueName: \"kubernetes.io/projected/5cc5f52f-76f6-430c-a302-b6b36fc84462-kube-api-access-bjzfm\") pod \"5cc5f52f-76f6-430c-a302-b6b36fc84462\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.016746 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-ssh-key\") pod \"5cc5f52f-76f6-430c-a302-b6b36fc84462\" (UID: \"5cc5f52f-76f6-430c-a302-b6b36fc84462\") " Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.022165 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cc5f52f-76f6-430c-a302-b6b36fc84462-kube-api-access-bjzfm" (OuterVolumeSpecName: "kube-api-access-bjzfm") pod "5cc5f52f-76f6-430c-a302-b6b36fc84462" (UID: "5cc5f52f-76f6-430c-a302-b6b36fc84462"). InnerVolumeSpecName "kube-api-access-bjzfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.023749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5cc5f52f-76f6-430c-a302-b6b36fc84462" (UID: "5cc5f52f-76f6-430c-a302-b6b36fc84462"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.046259 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5cc5f52f-76f6-430c-a302-b6b36fc84462" (UID: "5cc5f52f-76f6-430c-a302-b6b36fc84462"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.053893 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-inventory" (OuterVolumeSpecName: "inventory") pod "5cc5f52f-76f6-430c-a302-b6b36fc84462" (UID: "5cc5f52f-76f6-430c-a302-b6b36fc84462"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.118482 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.118766 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjzfm\" (UniqueName: \"kubernetes.io/projected/5cc5f52f-76f6-430c-a302-b6b36fc84462-kube-api-access-bjzfm\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.118778 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.118787 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc5f52f-76f6-430c-a302-b6b36fc84462-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.595240 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" event={"ID":"5cc5f52f-76f6-430c-a302-b6b36fc84462","Type":"ContainerDied","Data":"84acecbee4d196838c3e0a12bd77bfbde13f5636607ce26bc79107286747b1ca"} Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.595298 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84acecbee4d196838c3e0a12bd77bfbde13f5636607ce26bc79107286747b1ca" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.595315 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.670215 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r"] Nov 25 08:33:02 crc kubenswrapper[4760]: E1125 08:33:02.670614 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc5f52f-76f6-430c-a302-b6b36fc84462" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.670633 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc5f52f-76f6-430c-a302-b6b36fc84462" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.670829 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc5f52f-76f6-430c-a302-b6b36fc84462" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.672668 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.675896 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.676804 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.677440 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.681318 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.682591 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r"] Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.731442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.731642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.731726 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8j9k\" (UniqueName: \"kubernetes.io/projected/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-kube-api-access-g8j9k\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.731931 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.834065 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.834144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.834214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.834266 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8j9k\" (UniqueName: \"kubernetes.io/projected/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-kube-api-access-g8j9k\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.838496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.838754 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.841859 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.851554 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g8j9k\" (UniqueName: \"kubernetes.io/projected/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-kube-api-access-g8j9k\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:02 crc kubenswrapper[4760]: I1125 08:33:02.996174 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:33:03 crc kubenswrapper[4760]: I1125 08:33:03.499576 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r"] Nov 25 08:33:03 crc kubenswrapper[4760]: W1125 08:33:03.502583 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58fc7a0f_f6c7_4604_94f1_7af9fe6439de.slice/crio-a4955bca80656526d903279fcdddcf7ecb1acee4a176a71b523cb61a1d78544a WatchSource:0}: Error finding container a4955bca80656526d903279fcdddcf7ecb1acee4a176a71b523cb61a1d78544a: Status 404 returned error can't find the container with id a4955bca80656526d903279fcdddcf7ecb1acee4a176a71b523cb61a1d78544a Nov 25 08:33:03 crc kubenswrapper[4760]: I1125 08:33:03.605375 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" event={"ID":"58fc7a0f-f6c7-4604-94f1-7af9fe6439de","Type":"ContainerStarted","Data":"a4955bca80656526d903279fcdddcf7ecb1acee4a176a71b523cb61a1d78544a"} Nov 25 08:33:04 crc kubenswrapper[4760]: I1125 08:33:04.617164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" event={"ID":"58fc7a0f-f6c7-4604-94f1-7af9fe6439de","Type":"ContainerStarted","Data":"f232b828eb2d62e89694e9349d89ffbb63d2a688639461757ce281ea370a3b96"} Nov 25 08:33:29 crc kubenswrapper[4760]: 
I1125 08:33:29.736168 4760 scope.go:117] "RemoveContainer" containerID="e344a34eb86dae415ae484b86556afe426e052c7e44dbcac25a9241f90d819ba" Nov 25 08:33:29 crc kubenswrapper[4760]: I1125 08:33:29.769806 4760 scope.go:117] "RemoveContainer" containerID="c9b801485c25de17cda2dabe57e1991d03968731843b911e0241cbab2acadee2" Nov 25 08:33:29 crc kubenswrapper[4760]: I1125 08:33:29.828584 4760 scope.go:117] "RemoveContainer" containerID="a5b5032f75202681ff15f1849a5603fa93e68299a1d6ea58a8f9e77727a67d66" Nov 25 08:33:29 crc kubenswrapper[4760]: I1125 08:33:29.859398 4760 scope.go:117] "RemoveContainer" containerID="8b0f133493dddbd699c049cd7e3e2409af4216828301b50557f8d1c7dfacc1dc" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.425599 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" podStartSLOduration=81.98623787 podStartE2EDuration="1m22.425580256s" podCreationTimestamp="2025-11-25 08:33:02 +0000 UTC" firstStartedPulling="2025-11-25 08:33:03.50455845 +0000 UTC m=+1317.213589235" lastFinishedPulling="2025-11-25 08:33:03.943900816 +0000 UTC m=+1317.652931621" observedRunningTime="2025-11-25 08:33:04.64523425 +0000 UTC m=+1318.354265065" watchObservedRunningTime="2025-11-25 08:34:24.425580256 +0000 UTC m=+1398.134611051" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.427311 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-62tsb"] Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.430770 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.454155 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62tsb"] Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.504314 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-catalog-content\") pod \"community-operators-62tsb\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.504462 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479qj\" (UniqueName: \"kubernetes.io/projected/1842e725-5a7e-4852-a62a-ea244cbe3f74-kube-api-access-479qj\") pod \"community-operators-62tsb\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.504496 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-utilities\") pod \"community-operators-62tsb\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.605892 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-catalog-content\") pod \"community-operators-62tsb\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.605956 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-479qj\" (UniqueName: \"kubernetes.io/projected/1842e725-5a7e-4852-a62a-ea244cbe3f74-kube-api-access-479qj\") pod \"community-operators-62tsb\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.605980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-utilities\") pod \"community-operators-62tsb\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.606540 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-utilities\") pod \"community-operators-62tsb\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.606563 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-catalog-content\") pod \"community-operators-62tsb\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.629374 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-479qj\" (UniqueName: \"kubernetes.io/projected/1842e725-5a7e-4852-a62a-ea244cbe3f74-kube-api-access-479qj\") pod \"community-operators-62tsb\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:24 crc kubenswrapper[4760]: I1125 08:34:24.752286 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:25 crc kubenswrapper[4760]: I1125 08:34:25.298384 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62tsb"] Nov 25 08:34:25 crc kubenswrapper[4760]: I1125 08:34:25.510980 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62tsb" event={"ID":"1842e725-5a7e-4852-a62a-ea244cbe3f74","Type":"ContainerStarted","Data":"44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76"} Nov 25 08:34:25 crc kubenswrapper[4760]: I1125 08:34:25.511043 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62tsb" event={"ID":"1842e725-5a7e-4852-a62a-ea244cbe3f74","Type":"ContainerStarted","Data":"216b0c31c9c1dcc15eef6e4021687801f949255dd0c9dc96fe2b40ced2949f67"} Nov 25 08:34:26 crc kubenswrapper[4760]: I1125 08:34:26.519883 4760 generic.go:334] "Generic (PLEG): container finished" podID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerID="44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76" exitCode=0 Nov 25 08:34:26 crc kubenswrapper[4760]: I1125 08:34:26.519933 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62tsb" event={"ID":"1842e725-5a7e-4852-a62a-ea244cbe3f74","Type":"ContainerDied","Data":"44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76"} Nov 25 08:34:27 crc kubenswrapper[4760]: I1125 08:34:27.530041 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62tsb" event={"ID":"1842e725-5a7e-4852-a62a-ea244cbe3f74","Type":"ContainerStarted","Data":"2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f"} Nov 25 08:34:28 crc kubenswrapper[4760]: I1125 08:34:28.539503 4760 generic.go:334] "Generic (PLEG): container finished" podID="1842e725-5a7e-4852-a62a-ea244cbe3f74" 
containerID="2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f" exitCode=0 Nov 25 08:34:28 crc kubenswrapper[4760]: I1125 08:34:28.539593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62tsb" event={"ID":"1842e725-5a7e-4852-a62a-ea244cbe3f74","Type":"ContainerDied","Data":"2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f"} Nov 25 08:34:29 crc kubenswrapper[4760]: I1125 08:34:29.550782 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62tsb" event={"ID":"1842e725-5a7e-4852-a62a-ea244cbe3f74","Type":"ContainerStarted","Data":"6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841"} Nov 25 08:34:29 crc kubenswrapper[4760]: I1125 08:34:29.565930 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-62tsb" podStartSLOduration=2.875544491 podStartE2EDuration="5.565912305s" podCreationTimestamp="2025-11-25 08:34:24 +0000 UTC" firstStartedPulling="2025-11-25 08:34:26.521800565 +0000 UTC m=+1400.230831360" lastFinishedPulling="2025-11-25 08:34:29.212168369 +0000 UTC m=+1402.921199174" observedRunningTime="2025-11-25 08:34:29.563716732 +0000 UTC m=+1403.272747527" watchObservedRunningTime="2025-11-25 08:34:29.565912305 +0000 UTC m=+1403.274943100" Nov 25 08:34:29 crc kubenswrapper[4760]: I1125 08:34:29.934632 4760 scope.go:117] "RemoveContainer" containerID="ac49ef13ed406feecae306b7cfb175720518c51bb8559a5cd6106c5c3d32fa0a" Nov 25 08:34:31 crc kubenswrapper[4760]: I1125 08:34:31.746065 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:34:31 crc kubenswrapper[4760]: I1125 08:34:31.746127 4760 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:34:34 crc kubenswrapper[4760]: I1125 08:34:34.753914 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:34 crc kubenswrapper[4760]: I1125 08:34:34.754212 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:34 crc kubenswrapper[4760]: I1125 08:34:34.836656 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:35 crc kubenswrapper[4760]: I1125 08:34:35.680057 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:35 crc kubenswrapper[4760]: I1125 08:34:35.736231 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62tsb"] Nov 25 08:34:37 crc kubenswrapper[4760]: I1125 08:34:37.626888 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-62tsb" podUID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerName="registry-server" containerID="cri-o://6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841" gracePeriod=2 Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.041776 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.075975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-utilities\") pod \"1842e725-5a7e-4852-a62a-ea244cbe3f74\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.076528 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-catalog-content\") pod \"1842e725-5a7e-4852-a62a-ea244cbe3f74\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.076667 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-479qj\" (UniqueName: \"kubernetes.io/projected/1842e725-5a7e-4852-a62a-ea244cbe3f74-kube-api-access-479qj\") pod \"1842e725-5a7e-4852-a62a-ea244cbe3f74\" (UID: \"1842e725-5a7e-4852-a62a-ea244cbe3f74\") " Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.077309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-utilities" (OuterVolumeSpecName: "utilities") pod "1842e725-5a7e-4852-a62a-ea244cbe3f74" (UID: "1842e725-5a7e-4852-a62a-ea244cbe3f74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.094598 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1842e725-5a7e-4852-a62a-ea244cbe3f74-kube-api-access-479qj" (OuterVolumeSpecName: "kube-api-access-479qj") pod "1842e725-5a7e-4852-a62a-ea244cbe3f74" (UID: "1842e725-5a7e-4852-a62a-ea244cbe3f74"). InnerVolumeSpecName "kube-api-access-479qj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.126695 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1842e725-5a7e-4852-a62a-ea244cbe3f74" (UID: "1842e725-5a7e-4852-a62a-ea244cbe3f74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.179138 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.179176 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1842e725-5a7e-4852-a62a-ea244cbe3f74-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.179193 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-479qj\" (UniqueName: \"kubernetes.io/projected/1842e725-5a7e-4852-a62a-ea244cbe3f74-kube-api-access-479qj\") on node \"crc\" DevicePath \"\"" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.637352 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62tsb" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.637363 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62tsb" event={"ID":"1842e725-5a7e-4852-a62a-ea244cbe3f74","Type":"ContainerDied","Data":"6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841"} Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.637419 4760 scope.go:117] "RemoveContainer" containerID="6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.637406 4760 generic.go:334] "Generic (PLEG): container finished" podID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerID="6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841" exitCode=0 Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.637503 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62tsb" event={"ID":"1842e725-5a7e-4852-a62a-ea244cbe3f74","Type":"ContainerDied","Data":"216b0c31c9c1dcc15eef6e4021687801f949255dd0c9dc96fe2b40ced2949f67"} Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.678586 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62tsb"] Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.682862 4760 scope.go:117] "RemoveContainer" containerID="2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.689079 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-62tsb"] Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.709858 4760 scope.go:117] "RemoveContainer" containerID="44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.749987 4760 scope.go:117] "RemoveContainer" 
containerID="6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841" Nov 25 08:34:38 crc kubenswrapper[4760]: E1125 08:34:38.750502 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841\": container with ID starting with 6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841 not found: ID does not exist" containerID="6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.750567 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841"} err="failed to get container status \"6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841\": rpc error: code = NotFound desc = could not find container \"6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841\": container with ID starting with 6397a8b3109926e5bd6189b499a29b315df0255fd5e9e526f3ab5f2221438841 not found: ID does not exist" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.750603 4760 scope.go:117] "RemoveContainer" containerID="2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f" Nov 25 08:34:38 crc kubenswrapper[4760]: E1125 08:34:38.750965 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f\": container with ID starting with 2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f not found: ID does not exist" containerID="2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.751055 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f"} err="failed to get container status \"2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f\": rpc error: code = NotFound desc = could not find container \"2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f\": container with ID starting with 2043ca83d80b5c6270a293b203fbfc116c8f5474f2d04ee19272c6aad98b6e6f not found: ID does not exist" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.751120 4760 scope.go:117] "RemoveContainer" containerID="44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76" Nov 25 08:34:38 crc kubenswrapper[4760]: E1125 08:34:38.751543 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76\": container with ID starting with 44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76 not found: ID does not exist" containerID="44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.751588 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76"} err="failed to get container status \"44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76\": rpc error: code = NotFound desc = could not find container \"44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76\": container with ID starting with 44c8fd82a737334c79ad118ab75d104eb99be8b51aa4b479f9c27c56c910ac76 not found: ID does not exist" Nov 25 08:34:38 crc kubenswrapper[4760]: I1125 08:34:38.949225 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1842e725-5a7e-4852-a62a-ea244cbe3f74" path="/var/lib/kubelet/pods/1842e725-5a7e-4852-a62a-ea244cbe3f74/volumes" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 
08:34:51.659456 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mqlsx"] Nov 25 08:34:51 crc kubenswrapper[4760]: E1125 08:34:51.660385 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerName="extract-content" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.660397 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerName="extract-content" Nov 25 08:34:51 crc kubenswrapper[4760]: E1125 08:34:51.660416 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerName="extract-utilities" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.660422 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerName="extract-utilities" Nov 25 08:34:51 crc kubenswrapper[4760]: E1125 08:34:51.660453 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerName="registry-server" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.660459 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerName="registry-server" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.660827 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1842e725-5a7e-4852-a62a-ea244cbe3f74" containerName="registry-server" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.662884 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.669915 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqlsx"] Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.726585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-utilities\") pod \"redhat-operators-mqlsx\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.726655 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzf4s\" (UniqueName: \"kubernetes.io/projected/3615d434-285c-41c7-a227-e80a1964dd5c-kube-api-access-bzf4s\") pod \"redhat-operators-mqlsx\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.726762 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-catalog-content\") pod \"redhat-operators-mqlsx\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.828770 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-utilities\") pod \"redhat-operators-mqlsx\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.828866 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bzf4s\" (UniqueName: \"kubernetes.io/projected/3615d434-285c-41c7-a227-e80a1964dd5c-kube-api-access-bzf4s\") pod \"redhat-operators-mqlsx\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.828956 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-catalog-content\") pod \"redhat-operators-mqlsx\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.829488 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-utilities\") pod \"redhat-operators-mqlsx\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.829568 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-catalog-content\") pod \"redhat-operators-mqlsx\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:51 crc kubenswrapper[4760]: I1125 08:34:51.851355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzf4s\" (UniqueName: \"kubernetes.io/projected/3615d434-285c-41c7-a227-e80a1964dd5c-kube-api-access-bzf4s\") pod \"redhat-operators-mqlsx\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:52 crc kubenswrapper[4760]: I1125 08:34:52.050333 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:34:52 crc kubenswrapper[4760]: I1125 08:34:52.482074 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqlsx"] Nov 25 08:34:52 crc kubenswrapper[4760]: I1125 08:34:52.775179 4760 generic.go:334] "Generic (PLEG): container finished" podID="3615d434-285c-41c7-a227-e80a1964dd5c" containerID="95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623" exitCode=0 Nov 25 08:34:52 crc kubenswrapper[4760]: I1125 08:34:52.775226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqlsx" event={"ID":"3615d434-285c-41c7-a227-e80a1964dd5c","Type":"ContainerDied","Data":"95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623"} Nov 25 08:34:52 crc kubenswrapper[4760]: I1125 08:34:52.775264 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqlsx" event={"ID":"3615d434-285c-41c7-a227-e80a1964dd5c","Type":"ContainerStarted","Data":"cf2267cd65b63cfc02c6c7f8320b8e51743a14dca264163704980f745763b4ca"} Nov 25 08:34:53 crc kubenswrapper[4760]: I1125 08:34:53.785595 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqlsx" event={"ID":"3615d434-285c-41c7-a227-e80a1964dd5c","Type":"ContainerStarted","Data":"196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736"} Nov 25 08:34:54 crc kubenswrapper[4760]: I1125 08:34:54.796523 4760 generic.go:334] "Generic (PLEG): container finished" podID="3615d434-285c-41c7-a227-e80a1964dd5c" containerID="196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736" exitCode=0 Nov 25 08:34:54 crc kubenswrapper[4760]: I1125 08:34:54.796574 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqlsx" 
event={"ID":"3615d434-285c-41c7-a227-e80a1964dd5c","Type":"ContainerDied","Data":"196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736"} Nov 25 08:34:55 crc kubenswrapper[4760]: I1125 08:34:55.811954 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqlsx" event={"ID":"3615d434-285c-41c7-a227-e80a1964dd5c","Type":"ContainerStarted","Data":"54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971"} Nov 25 08:34:55 crc kubenswrapper[4760]: I1125 08:34:55.833903 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mqlsx" podStartSLOduration=2.133644842 podStartE2EDuration="4.833879536s" podCreationTimestamp="2025-11-25 08:34:51 +0000 UTC" firstStartedPulling="2025-11-25 08:34:52.776936001 +0000 UTC m=+1426.485966796" lastFinishedPulling="2025-11-25 08:34:55.477170695 +0000 UTC m=+1429.186201490" observedRunningTime="2025-11-25 08:34:55.826402753 +0000 UTC m=+1429.535433588" watchObservedRunningTime="2025-11-25 08:34:55.833879536 +0000 UTC m=+1429.542910331" Nov 25 08:35:01 crc kubenswrapper[4760]: I1125 08:35:01.746363 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:35:01 crc kubenswrapper[4760]: I1125 08:35:01.746882 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:35:02 crc kubenswrapper[4760]: I1125 08:35:02.051533 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:35:02 crc kubenswrapper[4760]: I1125 08:35:02.051581 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:35:02 crc kubenswrapper[4760]: I1125 08:35:02.107887 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:35:02 crc kubenswrapper[4760]: I1125 08:35:02.957837 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:35:03 crc kubenswrapper[4760]: I1125 08:35:03.009002 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqlsx"] Nov 25 08:35:04 crc kubenswrapper[4760]: I1125 08:35:04.890529 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mqlsx" podUID="3615d434-285c-41c7-a227-e80a1964dd5c" containerName="registry-server" containerID="cri-o://54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971" gracePeriod=2 Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.386415 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.393028 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzf4s\" (UniqueName: \"kubernetes.io/projected/3615d434-285c-41c7-a227-e80a1964dd5c-kube-api-access-bzf4s\") pod \"3615d434-285c-41c7-a227-e80a1964dd5c\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.393167 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-catalog-content\") pod \"3615d434-285c-41c7-a227-e80a1964dd5c\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.393217 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-utilities\") pod \"3615d434-285c-41c7-a227-e80a1964dd5c\" (UID: \"3615d434-285c-41c7-a227-e80a1964dd5c\") " Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.394099 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-utilities" (OuterVolumeSpecName: "utilities") pod "3615d434-285c-41c7-a227-e80a1964dd5c" (UID: "3615d434-285c-41c7-a227-e80a1964dd5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.398284 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3615d434-285c-41c7-a227-e80a1964dd5c-kube-api-access-bzf4s" (OuterVolumeSpecName: "kube-api-access-bzf4s") pod "3615d434-285c-41c7-a227-e80a1964dd5c" (UID: "3615d434-285c-41c7-a227-e80a1964dd5c"). InnerVolumeSpecName "kube-api-access-bzf4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.486054 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3615d434-285c-41c7-a227-e80a1964dd5c" (UID: "3615d434-285c-41c7-a227-e80a1964dd5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.494815 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.494843 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3615d434-285c-41c7-a227-e80a1964dd5c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.494852 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzf4s\" (UniqueName: \"kubernetes.io/projected/3615d434-285c-41c7-a227-e80a1964dd5c-kube-api-access-bzf4s\") on node \"crc\" DevicePath \"\"" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.904499 4760 generic.go:334] "Generic (PLEG): container finished" podID="3615d434-285c-41c7-a227-e80a1964dd5c" containerID="54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971" exitCode=0 Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.904553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqlsx" event={"ID":"3615d434-285c-41c7-a227-e80a1964dd5c","Type":"ContainerDied","Data":"54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971"} Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.904582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mqlsx" event={"ID":"3615d434-285c-41c7-a227-e80a1964dd5c","Type":"ContainerDied","Data":"cf2267cd65b63cfc02c6c7f8320b8e51743a14dca264163704980f745763b4ca"} Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.904603 4760 scope.go:117] "RemoveContainer" containerID="54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.904750 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqlsx" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.936512 4760 scope.go:117] "RemoveContainer" containerID="196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736" Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.944396 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqlsx"] Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.954458 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mqlsx"] Nov 25 08:35:05 crc kubenswrapper[4760]: I1125 08:35:05.960756 4760 scope.go:117] "RemoveContainer" containerID="95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623" Nov 25 08:35:06 crc kubenswrapper[4760]: I1125 08:35:06.003857 4760 scope.go:117] "RemoveContainer" containerID="54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971" Nov 25 08:35:06 crc kubenswrapper[4760]: E1125 08:35:06.004382 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971\": container with ID starting with 54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971 not found: ID does not exist" containerID="54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971" Nov 25 08:35:06 crc kubenswrapper[4760]: I1125 08:35:06.004433 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971"} err="failed to get container status \"54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971\": rpc error: code = NotFound desc = could not find container \"54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971\": container with ID starting with 54112292e279fcfd91b292da32d0c14e954b05163e3297ca440dd5cbdd6f0971 not found: ID does not exist" Nov 25 08:35:06 crc kubenswrapper[4760]: I1125 08:35:06.004468 4760 scope.go:117] "RemoveContainer" containerID="196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736" Nov 25 08:35:06 crc kubenswrapper[4760]: E1125 08:35:06.004882 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736\": container with ID starting with 196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736 not found: ID does not exist" containerID="196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736" Nov 25 08:35:06 crc kubenswrapper[4760]: I1125 08:35:06.004935 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736"} err="failed to get container status \"196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736\": rpc error: code = NotFound desc = could not find container \"196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736\": container with ID starting with 196e54c9511e9d05cf515f97bfaeaa5e5a227d1321e7d680a813a8e3dc487736 not found: ID does not exist" Nov 25 08:35:06 crc kubenswrapper[4760]: I1125 08:35:06.004966 4760 scope.go:117] "RemoveContainer" containerID="95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623" Nov 25 08:35:06 crc kubenswrapper[4760]: E1125 
08:35:06.005209 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623\": container with ID starting with 95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623 not found: ID does not exist" containerID="95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623" Nov 25 08:35:06 crc kubenswrapper[4760]: I1125 08:35:06.005273 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623"} err="failed to get container status \"95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623\": rpc error: code = NotFound desc = could not find container \"95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623\": container with ID starting with 95763483f10c9ea04e3cf0c4d7e8936b28924a2b72f84a6699824ef10e76b623 not found: ID does not exist" Nov 25 08:35:06 crc kubenswrapper[4760]: I1125 08:35:06.948728 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3615d434-285c-41c7-a227-e80a1964dd5c" path="/var/lib/kubelet/pods/3615d434-285c-41c7-a227-e80a1964dd5c/volumes" Nov 25 08:35:30 crc kubenswrapper[4760]: I1125 08:35:30.007773 4760 scope.go:117] "RemoveContainer" containerID="98cd0eb2943555555085d3ee8dd81577d5bbfc87745d36e244837ce8b55fbb67" Nov 25 08:35:30 crc kubenswrapper[4760]: I1125 08:35:30.034226 4760 scope.go:117] "RemoveContainer" containerID="5fd0c5be99f9ee7b58b378465e5bd85b87036fb49b2c56a5e008b1ccb68c0533" Nov 25 08:35:31 crc kubenswrapper[4760]: I1125 08:35:31.746577 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:35:31 crc 
kubenswrapper[4760]: I1125 08:35:31.746936 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:35:31 crc kubenswrapper[4760]: I1125 08:35:31.746983 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:35:31 crc kubenswrapper[4760]: I1125 08:35:31.747757 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca52788d396deaeb74b41a0b267f55e1f30d7a61af988b5f3d847e16dbb9f1b0"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:35:31 crc kubenswrapper[4760]: I1125 08:35:31.747816 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://ca52788d396deaeb74b41a0b267f55e1f30d7a61af988b5f3d847e16dbb9f1b0" gracePeriod=600 Nov 25 08:35:32 crc kubenswrapper[4760]: I1125 08:35:32.131287 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"ca52788d396deaeb74b41a0b267f55e1f30d7a61af988b5f3d847e16dbb9f1b0"} Nov 25 08:35:32 crc kubenswrapper[4760]: I1125 08:35:32.131844 4760 scope.go:117] "RemoveContainer" containerID="d0ea7124286527d9806dc0c775161bbfad1ddc74c136f4d8ca77bb8bd02e22cc" Nov 25 08:35:32 crc kubenswrapper[4760]: I1125 08:35:32.131299 4760 
generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="ca52788d396deaeb74b41a0b267f55e1f30d7a61af988b5f3d847e16dbb9f1b0" exitCode=0 Nov 25 08:35:32 crc kubenswrapper[4760]: I1125 08:35:32.132059 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c"} Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.709974 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v9sgl"] Nov 25 08:35:44 crc kubenswrapper[4760]: E1125 08:35:44.711507 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3615d434-285c-41c7-a227-e80a1964dd5c" containerName="registry-server" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.711527 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3615d434-285c-41c7-a227-e80a1964dd5c" containerName="registry-server" Nov 25 08:35:44 crc kubenswrapper[4760]: E1125 08:35:44.711553 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3615d434-285c-41c7-a227-e80a1964dd5c" containerName="extract-content" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.711563 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3615d434-285c-41c7-a227-e80a1964dd5c" containerName="extract-content" Nov 25 08:35:44 crc kubenswrapper[4760]: E1125 08:35:44.711607 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3615d434-285c-41c7-a227-e80a1964dd5c" containerName="extract-utilities" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.711615 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3615d434-285c-41c7-a227-e80a1964dd5c" containerName="extract-utilities" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.711839 4760 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3615d434-285c-41c7-a227-e80a1964dd5c" containerName="registry-server" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.732424 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.745676 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9sgl"] Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.852141 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bl8\" (UniqueName: \"kubernetes.io/projected/946d6e0c-3b30-4ab1-89e9-1702a5a70783-kube-api-access-c9bl8\") pod \"certified-operators-v9sgl\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.852203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-utilities\") pod \"certified-operators-v9sgl\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.852318 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-catalog-content\") pod \"certified-operators-v9sgl\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.954639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bl8\" (UniqueName: \"kubernetes.io/projected/946d6e0c-3b30-4ab1-89e9-1702a5a70783-kube-api-access-c9bl8\") pod 
\"certified-operators-v9sgl\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.954891 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-utilities\") pod \"certified-operators-v9sgl\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.955046 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-catalog-content\") pod \"certified-operators-v9sgl\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.955451 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-utilities\") pod \"certified-operators-v9sgl\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.955588 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-catalog-content\") pod \"certified-operators-v9sgl\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:44 crc kubenswrapper[4760]: I1125 08:35:44.979308 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bl8\" (UniqueName: \"kubernetes.io/projected/946d6e0c-3b30-4ab1-89e9-1702a5a70783-kube-api-access-c9bl8\") pod \"certified-operators-v9sgl\" (UID: 
\"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:45 crc kubenswrapper[4760]: I1125 08:35:45.068219 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:45 crc kubenswrapper[4760]: I1125 08:35:45.571809 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9sgl"] Nov 25 08:35:46 crc kubenswrapper[4760]: I1125 08:35:46.276555 4760 generic.go:334] "Generic (PLEG): container finished" podID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerID="24694ec6c3e3a87754531dfd827cb53f3d04108acacab08912ed9cc5e99c6c2b" exitCode=0 Nov 25 08:35:46 crc kubenswrapper[4760]: I1125 08:35:46.276667 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9sgl" event={"ID":"946d6e0c-3b30-4ab1-89e9-1702a5a70783","Type":"ContainerDied","Data":"24694ec6c3e3a87754531dfd827cb53f3d04108acacab08912ed9cc5e99c6c2b"} Nov 25 08:35:46 crc kubenswrapper[4760]: I1125 08:35:46.276917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9sgl" event={"ID":"946d6e0c-3b30-4ab1-89e9-1702a5a70783","Type":"ContainerStarted","Data":"f87459528c9c1779cc2199fe5b042a294e9c350caca7177a0bcc8951682a566d"} Nov 25 08:35:47 crc kubenswrapper[4760]: I1125 08:35:47.288317 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9sgl" event={"ID":"946d6e0c-3b30-4ab1-89e9-1702a5a70783","Type":"ContainerStarted","Data":"819ceb1413ead8636cf58ecb998a163d261f2b039808c13ffdfbf1936996b38d"} Nov 25 08:35:48 crc kubenswrapper[4760]: I1125 08:35:48.298151 4760 generic.go:334] "Generic (PLEG): container finished" podID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerID="819ceb1413ead8636cf58ecb998a163d261f2b039808c13ffdfbf1936996b38d" exitCode=0 Nov 25 08:35:48 crc kubenswrapper[4760]: I1125 
08:35:48.298256 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9sgl" event={"ID":"946d6e0c-3b30-4ab1-89e9-1702a5a70783","Type":"ContainerDied","Data":"819ceb1413ead8636cf58ecb998a163d261f2b039808c13ffdfbf1936996b38d"} Nov 25 08:35:49 crc kubenswrapper[4760]: I1125 08:35:49.308137 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9sgl" event={"ID":"946d6e0c-3b30-4ab1-89e9-1702a5a70783","Type":"ContainerStarted","Data":"0cb078cea89e0dbf470799e210aab36da818c8ae6de165e3a125915bf5b8ad49"} Nov 25 08:35:49 crc kubenswrapper[4760]: I1125 08:35:49.347854 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v9sgl" podStartSLOduration=2.73348483 podStartE2EDuration="5.347837947s" podCreationTimestamp="2025-11-25 08:35:44 +0000 UTC" firstStartedPulling="2025-11-25 08:35:46.278240093 +0000 UTC m=+1479.987270888" lastFinishedPulling="2025-11-25 08:35:48.89259321 +0000 UTC m=+1482.601624005" observedRunningTime="2025-11-25 08:35:49.33739017 +0000 UTC m=+1483.046420975" watchObservedRunningTime="2025-11-25 08:35:49.347837947 +0000 UTC m=+1483.056868742" Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.708448 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c48j4"] Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.711469 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.739526 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c48j4"] Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.889484 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zb6f\" (UniqueName: \"kubernetes.io/projected/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-kube-api-access-9zb6f\") pod \"redhat-marketplace-c48j4\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.889636 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-utilities\") pod \"redhat-marketplace-c48j4\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.889764 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-catalog-content\") pod \"redhat-marketplace-c48j4\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.991317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-catalog-content\") pod \"redhat-marketplace-c48j4\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.991389 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9zb6f\" (UniqueName: \"kubernetes.io/projected/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-kube-api-access-9zb6f\") pod \"redhat-marketplace-c48j4\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.991464 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-utilities\") pod \"redhat-marketplace-c48j4\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.991960 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-catalog-content\") pod \"redhat-marketplace-c48j4\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:51 crc kubenswrapper[4760]: I1125 08:35:51.991960 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-utilities\") pod \"redhat-marketplace-c48j4\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:52 crc kubenswrapper[4760]: I1125 08:35:52.020824 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zb6f\" (UniqueName: \"kubernetes.io/projected/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-kube-api-access-9zb6f\") pod \"redhat-marketplace-c48j4\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:52 crc kubenswrapper[4760]: I1125 08:35:52.051203 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:35:52 crc kubenswrapper[4760]: I1125 08:35:52.580219 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c48j4"] Nov 25 08:35:52 crc kubenswrapper[4760]: W1125 08:35:52.588457 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb976fdc7_f5ae_4f60_afab_1a1590cb0b08.slice/crio-02048107a948cbe133d28b6471275bc52f720c1415e659c396c33ff022f20d7c WatchSource:0}: Error finding container 02048107a948cbe133d28b6471275bc52f720c1415e659c396c33ff022f20d7c: Status 404 returned error can't find the container with id 02048107a948cbe133d28b6471275bc52f720c1415e659c396c33ff022f20d7c Nov 25 08:35:53 crc kubenswrapper[4760]: I1125 08:35:53.347542 4760 generic.go:334] "Generic (PLEG): container finished" podID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerID="c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9" exitCode=0 Nov 25 08:35:53 crc kubenswrapper[4760]: I1125 08:35:53.347602 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c48j4" event={"ID":"b976fdc7-f5ae-4f60-afab-1a1590cb0b08","Type":"ContainerDied","Data":"c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9"} Nov 25 08:35:53 crc kubenswrapper[4760]: I1125 08:35:53.347636 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c48j4" event={"ID":"b976fdc7-f5ae-4f60-afab-1a1590cb0b08","Type":"ContainerStarted","Data":"02048107a948cbe133d28b6471275bc52f720c1415e659c396c33ff022f20d7c"} Nov 25 08:35:54 crc kubenswrapper[4760]: I1125 08:35:54.359785 4760 generic.go:334] "Generic (PLEG): container finished" podID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerID="b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9" exitCode=0 Nov 25 08:35:54 crc kubenswrapper[4760]: I1125 
08:35:54.359837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c48j4" event={"ID":"b976fdc7-f5ae-4f60-afab-1a1590cb0b08","Type":"ContainerDied","Data":"b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9"} Nov 25 08:35:55 crc kubenswrapper[4760]: I1125 08:35:55.068707 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:55 crc kubenswrapper[4760]: I1125 08:35:55.069018 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:55 crc kubenswrapper[4760]: I1125 08:35:55.112740 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:55 crc kubenswrapper[4760]: I1125 08:35:55.371557 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c48j4" event={"ID":"b976fdc7-f5ae-4f60-afab-1a1590cb0b08","Type":"ContainerStarted","Data":"366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85"} Nov 25 08:35:55 crc kubenswrapper[4760]: I1125 08:35:55.398672 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c48j4" podStartSLOduration=2.890552294 podStartE2EDuration="4.398651301s" podCreationTimestamp="2025-11-25 08:35:51 +0000 UTC" firstStartedPulling="2025-11-25 08:35:53.350943933 +0000 UTC m=+1487.059974728" lastFinishedPulling="2025-11-25 08:35:54.85904294 +0000 UTC m=+1488.568073735" observedRunningTime="2025-11-25 08:35:55.388158432 +0000 UTC m=+1489.097189227" watchObservedRunningTime="2025-11-25 08:35:55.398651301 +0000 UTC m=+1489.107682096" Nov 25 08:35:55 crc kubenswrapper[4760]: I1125 08:35:55.419199 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 
08:35:57 crc kubenswrapper[4760]: I1125 08:35:57.490995 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9sgl"] Nov 25 08:35:57 crc kubenswrapper[4760]: I1125 08:35:57.491753 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v9sgl" podUID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerName="registry-server" containerID="cri-o://0cb078cea89e0dbf470799e210aab36da818c8ae6de165e3a125915bf5b8ad49" gracePeriod=2 Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.417405 4760 generic.go:334] "Generic (PLEG): container finished" podID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerID="0cb078cea89e0dbf470799e210aab36da818c8ae6de165e3a125915bf5b8ad49" exitCode=0 Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.417557 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9sgl" event={"ID":"946d6e0c-3b30-4ab1-89e9-1702a5a70783","Type":"ContainerDied","Data":"0cb078cea89e0dbf470799e210aab36da818c8ae6de165e3a125915bf5b8ad49"} Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.525529 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.619677 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-utilities\") pod \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.619832 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-catalog-content\") pod \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.620005 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9bl8\" (UniqueName: \"kubernetes.io/projected/946d6e0c-3b30-4ab1-89e9-1702a5a70783-kube-api-access-c9bl8\") pod \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\" (UID: \"946d6e0c-3b30-4ab1-89e9-1702a5a70783\") " Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.621781 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-utilities" (OuterVolumeSpecName: "utilities") pod "946d6e0c-3b30-4ab1-89e9-1702a5a70783" (UID: "946d6e0c-3b30-4ab1-89e9-1702a5a70783"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.628730 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946d6e0c-3b30-4ab1-89e9-1702a5a70783-kube-api-access-c9bl8" (OuterVolumeSpecName: "kube-api-access-c9bl8") pod "946d6e0c-3b30-4ab1-89e9-1702a5a70783" (UID: "946d6e0c-3b30-4ab1-89e9-1702a5a70783"). InnerVolumeSpecName "kube-api-access-c9bl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.688540 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "946d6e0c-3b30-4ab1-89e9-1702a5a70783" (UID: "946d6e0c-3b30-4ab1-89e9-1702a5a70783"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.722220 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9bl8\" (UniqueName: \"kubernetes.io/projected/946d6e0c-3b30-4ab1-89e9-1702a5a70783-kube-api-access-c9bl8\") on node \"crc\" DevicePath \"\"" Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.722287 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:35:58 crc kubenswrapper[4760]: I1125 08:35:58.722304 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946d6e0c-3b30-4ab1-89e9-1702a5a70783-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:35:59 crc kubenswrapper[4760]: I1125 08:35:59.428323 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9sgl" event={"ID":"946d6e0c-3b30-4ab1-89e9-1702a5a70783","Type":"ContainerDied","Data":"f87459528c9c1779cc2199fe5b042a294e9c350caca7177a0bcc8951682a566d"} Nov 25 08:35:59 crc kubenswrapper[4760]: I1125 08:35:59.428380 4760 scope.go:117] "RemoveContainer" containerID="0cb078cea89e0dbf470799e210aab36da818c8ae6de165e3a125915bf5b8ad49" Nov 25 08:35:59 crc kubenswrapper[4760]: I1125 08:35:59.428423 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v9sgl" Nov 25 08:35:59 crc kubenswrapper[4760]: I1125 08:35:59.452094 4760 scope.go:117] "RemoveContainer" containerID="819ceb1413ead8636cf58ecb998a163d261f2b039808c13ffdfbf1936996b38d" Nov 25 08:35:59 crc kubenswrapper[4760]: I1125 08:35:59.457318 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9sgl"] Nov 25 08:35:59 crc kubenswrapper[4760]: I1125 08:35:59.467398 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v9sgl"] Nov 25 08:35:59 crc kubenswrapper[4760]: I1125 08:35:59.479552 4760 scope.go:117] "RemoveContainer" containerID="24694ec6c3e3a87754531dfd827cb53f3d04108acacab08912ed9cc5e99c6c2b" Nov 25 08:36:00 crc kubenswrapper[4760]: I1125 08:36:00.949054 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" path="/var/lib/kubelet/pods/946d6e0c-3b30-4ab1-89e9-1702a5a70783/volumes" Nov 25 08:36:02 crc kubenswrapper[4760]: I1125 08:36:02.052382 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:36:02 crc kubenswrapper[4760]: I1125 08:36:02.052462 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:36:02 crc kubenswrapper[4760]: I1125 08:36:02.110075 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:36:02 crc kubenswrapper[4760]: I1125 08:36:02.517177 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:36:03 crc kubenswrapper[4760]: I1125 08:36:03.503962 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c48j4"] Nov 25 08:36:05 crc 
kubenswrapper[4760]: I1125 08:36:05.482727 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c48j4" podUID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerName="registry-server" containerID="cri-o://366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85" gracePeriod=2 Nov 25 08:36:05 crc kubenswrapper[4760]: I1125 08:36:05.902380 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:36:05 crc kubenswrapper[4760]: I1125 08:36:05.949689 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-utilities\") pod \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " Nov 25 08:36:05 crc kubenswrapper[4760]: I1125 08:36:05.949740 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zb6f\" (UniqueName: \"kubernetes.io/projected/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-kube-api-access-9zb6f\") pod \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " Nov 25 08:36:05 crc kubenswrapper[4760]: I1125 08:36:05.949931 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-catalog-content\") pod \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\" (UID: \"b976fdc7-f5ae-4f60-afab-1a1590cb0b08\") " Nov 25 08:36:05 crc kubenswrapper[4760]: I1125 08:36:05.950512 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-utilities" (OuterVolumeSpecName: "utilities") pod "b976fdc7-f5ae-4f60-afab-1a1590cb0b08" (UID: "b976fdc7-f5ae-4f60-afab-1a1590cb0b08"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:36:05 crc kubenswrapper[4760]: I1125 08:36:05.956080 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-kube-api-access-9zb6f" (OuterVolumeSpecName: "kube-api-access-9zb6f") pod "b976fdc7-f5ae-4f60-afab-1a1590cb0b08" (UID: "b976fdc7-f5ae-4f60-afab-1a1590cb0b08"). InnerVolumeSpecName "kube-api-access-9zb6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:36:05 crc kubenswrapper[4760]: I1125 08:36:05.967404 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b976fdc7-f5ae-4f60-afab-1a1590cb0b08" (UID: "b976fdc7-f5ae-4f60-afab-1a1590cb0b08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.052372 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.052417 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.052430 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zb6f\" (UniqueName: \"kubernetes.io/projected/b976fdc7-f5ae-4f60-afab-1a1590cb0b08-kube-api-access-9zb6f\") on node \"crc\" DevicePath \"\"" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.496163 4760 generic.go:334] "Generic (PLEG): container finished" podID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" 
containerID="366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85" exitCode=0 Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.496269 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c48j4" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.497577 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c48j4" event={"ID":"b976fdc7-f5ae-4f60-afab-1a1590cb0b08","Type":"ContainerDied","Data":"366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85"} Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.497740 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c48j4" event={"ID":"b976fdc7-f5ae-4f60-afab-1a1590cb0b08","Type":"ContainerDied","Data":"02048107a948cbe133d28b6471275bc52f720c1415e659c396c33ff022f20d7c"} Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.497794 4760 scope.go:117] "RemoveContainer" containerID="366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.522503 4760 scope.go:117] "RemoveContainer" containerID="b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.536649 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c48j4"] Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.544631 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c48j4"] Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.557433 4760 scope.go:117] "RemoveContainer" containerID="c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.595981 4760 scope.go:117] "RemoveContainer" containerID="366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85" Nov 25 
08:36:06 crc kubenswrapper[4760]: E1125 08:36:06.597429 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85\": container with ID starting with 366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85 not found: ID does not exist" containerID="366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.597501 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85"} err="failed to get container status \"366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85\": rpc error: code = NotFound desc = could not find container \"366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85\": container with ID starting with 366b8e4a17f3de6bdaa8c7fb7df7f3c2118ed439bf9352de6c72b71db94bbc85 not found: ID does not exist" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.597548 4760 scope.go:117] "RemoveContainer" containerID="b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9" Nov 25 08:36:06 crc kubenswrapper[4760]: E1125 08:36:06.598145 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9\": container with ID starting with b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9 not found: ID does not exist" containerID="b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.598195 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9"} err="failed to get container status 
\"b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9\": rpc error: code = NotFound desc = could not find container \"b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9\": container with ID starting with b7f711a942c6d17b3ea1b23b7ebc69575b2fda38f2f2b2c0829e6e75eb1e69c9 not found: ID does not exist" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.598237 4760 scope.go:117] "RemoveContainer" containerID="c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9" Nov 25 08:36:06 crc kubenswrapper[4760]: E1125 08:36:06.598686 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9\": container with ID starting with c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9 not found: ID does not exist" containerID="c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.598723 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9"} err="failed to get container status \"c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9\": rpc error: code = NotFound desc = could not find container \"c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9\": container with ID starting with c53cc2c376cfdd242771d0c2e5ee77b1b84a6953dda849746620b392dce6e7c9 not found: ID does not exist" Nov 25 08:36:06 crc kubenswrapper[4760]: I1125 08:36:06.957841 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" path="/var/lib/kubelet/pods/b976fdc7-f5ae-4f60-afab-1a1590cb0b08/volumes" Nov 25 08:36:17 crc kubenswrapper[4760]: I1125 08:36:17.620633 4760 generic.go:334] "Generic (PLEG): container finished" podID="58fc7a0f-f6c7-4604-94f1-7af9fe6439de" 
containerID="f232b828eb2d62e89694e9349d89ffbb63d2a688639461757ce281ea370a3b96" exitCode=0 Nov 25 08:36:17 crc kubenswrapper[4760]: I1125 08:36:17.620782 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" event={"ID":"58fc7a0f-f6c7-4604-94f1-7af9fe6439de","Type":"ContainerDied","Data":"f232b828eb2d62e89694e9349d89ffbb63d2a688639461757ce281ea370a3b96"} Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.073661 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.273642 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-inventory\") pod \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.273690 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-bootstrap-combined-ca-bundle\") pod \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.273770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8j9k\" (UniqueName: \"kubernetes.io/projected/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-kube-api-access-g8j9k\") pod \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.274056 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-ssh-key\") pod 
\"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\" (UID: \"58fc7a0f-f6c7-4604-94f1-7af9fe6439de\") " Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.281371 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-kube-api-access-g8j9k" (OuterVolumeSpecName: "kube-api-access-g8j9k") pod "58fc7a0f-f6c7-4604-94f1-7af9fe6439de" (UID: "58fc7a0f-f6c7-4604-94f1-7af9fe6439de"). InnerVolumeSpecName "kube-api-access-g8j9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.281981 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "58fc7a0f-f6c7-4604-94f1-7af9fe6439de" (UID: "58fc7a0f-f6c7-4604-94f1-7af9fe6439de"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.304424 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "58fc7a0f-f6c7-4604-94f1-7af9fe6439de" (UID: "58fc7a0f-f6c7-4604-94f1-7af9fe6439de"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.325504 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-inventory" (OuterVolumeSpecName: "inventory") pod "58fc7a0f-f6c7-4604-94f1-7af9fe6439de" (UID: "58fc7a0f-f6c7-4604-94f1-7af9fe6439de"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.377595 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8j9k\" (UniqueName: \"kubernetes.io/projected/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-kube-api-access-g8j9k\") on node \"crc\" DevicePath \"\"" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.377645 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.377705 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.377726 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58fc7a0f-f6c7-4604-94f1-7af9fe6439de-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.642498 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.642483 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r" event={"ID":"58fc7a0f-f6c7-4604-94f1-7af9fe6439de","Type":"ContainerDied","Data":"a4955bca80656526d903279fcdddcf7ecb1acee4a176a71b523cb61a1d78544a"} Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.642666 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4955bca80656526d903279fcdddcf7ecb1acee4a176a71b523cb61a1d78544a" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.723570 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67"] Nov 25 08:36:19 crc kubenswrapper[4760]: E1125 08:36:19.724015 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerName="extract-content" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724035 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerName="extract-content" Nov 25 08:36:19 crc kubenswrapper[4760]: E1125 08:36:19.724050 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58fc7a0f-f6c7-4604-94f1-7af9fe6439de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724063 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="58fc7a0f-f6c7-4604-94f1-7af9fe6439de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 08:36:19 crc kubenswrapper[4760]: E1125 08:36:19.724082 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerName="registry-server" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724090 4760 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerName="registry-server" Nov 25 08:36:19 crc kubenswrapper[4760]: E1125 08:36:19.724109 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerName="extract-content" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724117 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerName="extract-content" Nov 25 08:36:19 crc kubenswrapper[4760]: E1125 08:36:19.724130 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerName="extract-utilities" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724140 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerName="extract-utilities" Nov 25 08:36:19 crc kubenswrapper[4760]: E1125 08:36:19.724160 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerName="extract-utilities" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724168 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerName="extract-utilities" Nov 25 08:36:19 crc kubenswrapper[4760]: E1125 08:36:19.724193 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerName="registry-server" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724202 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerName="registry-server" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724487 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b976fdc7-f5ae-4f60-afab-1a1590cb0b08" containerName="registry-server" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724528 4760 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="946d6e0c-3b30-4ab1-89e9-1702a5a70783" containerName="registry-server" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.724548 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="58fc7a0f-f6c7-4604-94f1-7af9fe6439de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.725468 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.727640 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.730131 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.731900 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.739862 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.742575 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67"] Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.784739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjn67\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.785094 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj8tm\" (UniqueName: \"kubernetes.io/projected/55c815e4-e305-41af-9739-5d60e5750c12-kube-api-access-bj8tm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjn67\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.785136 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjn67\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.886915 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjn67\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.887140 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj8tm\" (UniqueName: \"kubernetes.io/projected/55c815e4-e305-41af-9739-5d60e5750c12-kube-api-access-bj8tm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjn67\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.887185 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjn67\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.891162 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjn67\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.893737 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjn67\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:19 crc kubenswrapper[4760]: I1125 08:36:19.905868 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj8tm\" (UniqueName: \"kubernetes.io/projected/55c815e4-e305-41af-9739-5d60e5750c12-kube-api-access-bj8tm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sjn67\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:20 crc kubenswrapper[4760]: I1125 08:36:20.050889 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:36:20 crc kubenswrapper[4760]: I1125 08:36:20.640607 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67"] Nov 25 08:36:20 crc kubenswrapper[4760]: I1125 08:36:20.654008 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" event={"ID":"55c815e4-e305-41af-9739-5d60e5750c12","Type":"ContainerStarted","Data":"652d1f6bad1b45852956041f43f54449d9cf138a84a3fd31687c023647fdb4f9"} Nov 25 08:36:21 crc kubenswrapper[4760]: I1125 08:36:21.682084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" event={"ID":"55c815e4-e305-41af-9739-5d60e5750c12","Type":"ContainerStarted","Data":"6c16aa8f71d4e7b7aa37c569cf465ddc6357ae10ed3e5290a26fb5f73bdc5226"} Nov 25 08:36:21 crc kubenswrapper[4760]: I1125 08:36:21.707586 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" podStartSLOduration=2.249044939 podStartE2EDuration="2.707567429s" podCreationTimestamp="2025-11-25 08:36:19 +0000 UTC" firstStartedPulling="2025-11-25 08:36:20.640817824 +0000 UTC m=+1514.349848619" lastFinishedPulling="2025-11-25 08:36:21.099340294 +0000 UTC m=+1514.808371109" observedRunningTime="2025-11-25 08:36:21.706471648 +0000 UTC m=+1515.415502483" watchObservedRunningTime="2025-11-25 08:36:21.707567429 +0000 UTC m=+1515.416598224" Nov 25 08:36:30 crc kubenswrapper[4760]: I1125 08:36:30.120785 4760 scope.go:117] "RemoveContainer" containerID="91bbd968de891dd2c3721d8043ea159565fda7da0c970a5aec82886f9b908206" Nov 25 08:36:30 crc kubenswrapper[4760]: I1125 08:36:30.150433 4760 scope.go:117] "RemoveContainer" 
containerID="8713dae99663bc6d5635b5873d189fc8ab82b435b748850967d31591a558cb0a" Nov 25 08:37:10 crc kubenswrapper[4760]: I1125 08:37:10.044759 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4ln57"] Nov 25 08:37:10 crc kubenswrapper[4760]: I1125 08:37:10.054195 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e5f4-account-create-lz7hl"] Nov 25 08:37:10 crc kubenswrapper[4760]: I1125 08:37:10.065690 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4ln57"] Nov 25 08:37:10 crc kubenswrapper[4760]: I1125 08:37:10.075481 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e5f4-account-create-lz7hl"] Nov 25 08:37:10 crc kubenswrapper[4760]: I1125 08:37:10.952093 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29782cbf-c176-4549-95ca-9a4c6c439459" path="/var/lib/kubelet/pods/29782cbf-c176-4549-95ca-9a4c6c439459/volumes" Nov 25 08:37:10 crc kubenswrapper[4760]: I1125 08:37:10.952682 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ac8fd8-1d1b-415e-963e-ad2242769cad" path="/var/lib/kubelet/pods/c9ac8fd8-1d1b-415e-963e-ad2242769cad/volumes" Nov 25 08:37:14 crc kubenswrapper[4760]: I1125 08:37:14.036914 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-885wz"] Nov 25 08:37:14 crc kubenswrapper[4760]: I1125 08:37:14.047412 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-fe7b-account-create-dwdsg"] Nov 25 08:37:14 crc kubenswrapper[4760]: I1125 08:37:14.057431 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vbtwz"] Nov 25 08:37:14 crc kubenswrapper[4760]: I1125 08:37:14.066057 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vbtwz"] Nov 25 08:37:14 crc kubenswrapper[4760]: I1125 08:37:14.073043 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-fe7b-account-create-dwdsg"] Nov 25 08:37:14 crc kubenswrapper[4760]: I1125 08:37:14.079712 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-885wz"] Nov 25 08:37:14 crc kubenswrapper[4760]: I1125 08:37:14.951320 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c95ebd-5709-47f6-b0cc-518622250437" path="/var/lib/kubelet/pods/a0c95ebd-5709-47f6-b0cc-518622250437/volumes" Nov 25 08:37:14 crc kubenswrapper[4760]: I1125 08:37:14.952639 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0361740-20d2-4735-9d93-a5d2fe88b1e1" path="/var/lib/kubelet/pods/b0361740-20d2-4735-9d93-a5d2fe88b1e1/volumes" Nov 25 08:37:14 crc kubenswrapper[4760]: I1125 08:37:14.953226 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b034ea-33e2-47fa-beb9-5c05687bc805" path="/var/lib/kubelet/pods/c3b034ea-33e2-47fa-beb9-5c05687bc805/volumes" Nov 25 08:37:15 crc kubenswrapper[4760]: I1125 08:37:15.030103 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-56e3-account-create-kn9lx"] Nov 25 08:37:15 crc kubenswrapper[4760]: I1125 08:37:15.038437 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-56e3-account-create-kn9lx"] Nov 25 08:37:16 crc kubenswrapper[4760]: I1125 08:37:16.957103 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea1c9c96-0d40-4b97-9463-85050a4b7bc8" path="/var/lib/kubelet/pods/ea1c9c96-0d40-4b97-9463-85050a4b7bc8/volumes" Nov 25 08:37:30 crc kubenswrapper[4760]: I1125 08:37:30.265708 4760 scope.go:117] "RemoveContainer" containerID="af1728b30f037c403bc5a54c88812af04bcbcc79d508fe6e072a4a73dcc810ea" Nov 25 08:37:30 crc kubenswrapper[4760]: I1125 08:37:30.319726 4760 scope.go:117] "RemoveContainer" containerID="eab4a6959d031f2ed45f482e3d73d3251d2dd94880fe363dc6ee4b683e11032d" Nov 25 08:37:30 crc kubenswrapper[4760]: I1125 08:37:30.371330 4760 scope.go:117] 
"RemoveContainer" containerID="f17ba7504c2fa48b7b56d9003e8c6e845b519fe9a5f05510b2c5eafd50289a7b" Nov 25 08:37:30 crc kubenswrapper[4760]: I1125 08:37:30.402390 4760 scope.go:117] "RemoveContainer" containerID="d8be27c651b5994c2fb2ca53c6513db17d61105e2a40d6e59d21fd82a2d4592c" Nov 25 08:37:30 crc kubenswrapper[4760]: I1125 08:37:30.454828 4760 scope.go:117] "RemoveContainer" containerID="cd265a4299e6f7eb5312d11ee89471e41eafd91f356d346e2b8bd9d2cec99ea1" Nov 25 08:37:30 crc kubenswrapper[4760]: I1125 08:37:30.515115 4760 scope.go:117] "RemoveContainer" containerID="cfe1cffae2b7612e2e384fee5b08a2b1be8bb1ae86211f44c9f3c3ec12f18af8" Nov 25 08:37:32 crc kubenswrapper[4760]: I1125 08:37:32.062151 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7slrm"] Nov 25 08:37:32 crc kubenswrapper[4760]: I1125 08:37:32.074391 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7slrm"] Nov 25 08:37:32 crc kubenswrapper[4760]: I1125 08:37:32.469078 4760 generic.go:334] "Generic (PLEG): container finished" podID="55c815e4-e305-41af-9739-5d60e5750c12" containerID="6c16aa8f71d4e7b7aa37c569cf465ddc6357ae10ed3e5290a26fb5f73bdc5226" exitCode=0 Nov 25 08:37:32 crc kubenswrapper[4760]: I1125 08:37:32.469131 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" event={"ID":"55c815e4-e305-41af-9739-5d60e5750c12","Type":"ContainerDied","Data":"6c16aa8f71d4e7b7aa37c569cf465ddc6357ae10ed3e5290a26fb5f73bdc5226"} Nov 25 08:37:32 crc kubenswrapper[4760]: I1125 08:37:32.949116 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098e59d2-c893-4917-b18b-d0ba993a45c5" path="/var/lib/kubelet/pods/098e59d2-c893-4917-b18b-d0ba993a45c5/volumes" Nov 25 08:37:33 crc kubenswrapper[4760]: I1125 08:37:33.870037 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:37:33 crc kubenswrapper[4760]: I1125 08:37:33.989796 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-ssh-key\") pod \"55c815e4-e305-41af-9739-5d60e5750c12\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " Nov 25 08:37:33 crc kubenswrapper[4760]: I1125 08:37:33.990040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj8tm\" (UniqueName: \"kubernetes.io/projected/55c815e4-e305-41af-9739-5d60e5750c12-kube-api-access-bj8tm\") pod \"55c815e4-e305-41af-9739-5d60e5750c12\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " Nov 25 08:37:33 crc kubenswrapper[4760]: I1125 08:37:33.990096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-inventory\") pod \"55c815e4-e305-41af-9739-5d60e5750c12\" (UID: \"55c815e4-e305-41af-9739-5d60e5750c12\") " Nov 25 08:37:33 crc kubenswrapper[4760]: I1125 08:37:33.997898 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c815e4-e305-41af-9739-5d60e5750c12-kube-api-access-bj8tm" (OuterVolumeSpecName: "kube-api-access-bj8tm") pod "55c815e4-e305-41af-9739-5d60e5750c12" (UID: "55c815e4-e305-41af-9739-5d60e5750c12"). InnerVolumeSpecName "kube-api-access-bj8tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.024469 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "55c815e4-e305-41af-9739-5d60e5750c12" (UID: "55c815e4-e305-41af-9739-5d60e5750c12"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.025468 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-inventory" (OuterVolumeSpecName: "inventory") pod "55c815e4-e305-41af-9739-5d60e5750c12" (UID: "55c815e4-e305-41af-9739-5d60e5750c12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.092501 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj8tm\" (UniqueName: \"kubernetes.io/projected/55c815e4-e305-41af-9739-5d60e5750c12-kube-api-access-bj8tm\") on node \"crc\" DevicePath \"\"" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.092817 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.092828 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55c815e4-e305-41af-9739-5d60e5750c12-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.493052 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" event={"ID":"55c815e4-e305-41af-9739-5d60e5750c12","Type":"ContainerDied","Data":"652d1f6bad1b45852956041f43f54449d9cf138a84a3fd31687c023647fdb4f9"} Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.493431 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="652d1f6bad1b45852956041f43f54449d9cf138a84a3fd31687c023647fdb4f9" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.493150 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.578018 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n"] Nov 25 08:37:34 crc kubenswrapper[4760]: E1125 08:37:34.578485 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c815e4-e305-41af-9739-5d60e5750c12" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.578510 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c815e4-e305-41af-9739-5d60e5750c12" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.578738 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c815e4-e305-41af-9739-5d60e5750c12" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.579538 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.585112 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.585159 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.585326 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.585648 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.595463 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n"] Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.708612 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqrsv\" (UniqueName: \"kubernetes.io/projected/09633aa4-95d9-4047-b7e0-e6c90f58845c-kube-api-access-kqrsv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.708672 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 
08:37:34.708804 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.811827 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.811994 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqrsv\" (UniqueName: \"kubernetes.io/projected/09633aa4-95d9-4047-b7e0-e6c90f58845c-kube-api-access-kqrsv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.812034 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.817521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.819111 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.837388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqrsv\" (UniqueName: \"kubernetes.io/projected/09633aa4-95d9-4047-b7e0-e6c90f58845c-kube-api-access-kqrsv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:34 crc kubenswrapper[4760]: I1125 08:37:34.905952 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:35 crc kubenswrapper[4760]: I1125 08:37:35.416141 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n"] Nov 25 08:37:35 crc kubenswrapper[4760]: I1125 08:37:35.501854 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" event={"ID":"09633aa4-95d9-4047-b7e0-e6c90f58845c","Type":"ContainerStarted","Data":"71acf9b92aa9d1db45b512d6174d6faf8ee4a4d847c793a5715d1ced72b81793"} Nov 25 08:37:36 crc kubenswrapper[4760]: I1125 08:37:36.512728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" event={"ID":"09633aa4-95d9-4047-b7e0-e6c90f58845c","Type":"ContainerStarted","Data":"849bf1b9368102117ef5e6f33c74681d2d35a8e411e8aed23ec46c15b1094ad0"} Nov 25 08:37:41 crc kubenswrapper[4760]: I1125 08:37:41.577544 4760 generic.go:334] "Generic (PLEG): container finished" podID="09633aa4-95d9-4047-b7e0-e6c90f58845c" containerID="849bf1b9368102117ef5e6f33c74681d2d35a8e411e8aed23ec46c15b1094ad0" exitCode=0 Nov 25 08:37:41 crc kubenswrapper[4760]: I1125 08:37:41.577671 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" event={"ID":"09633aa4-95d9-4047-b7e0-e6c90f58845c","Type":"ContainerDied","Data":"849bf1b9368102117ef5e6f33c74681d2d35a8e411e8aed23ec46c15b1094ad0"} Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.044333 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-49bc-account-create-9f8xp"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.071608 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jqz7h"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.083110 4760 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xwpz5"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.096262 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.101626 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-49bc-account-create-9f8xp"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.113780 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xwpz5"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.121763 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jqz7h"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.128656 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-hllg2"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.135875 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-264d-account-create-bgw6r"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.141920 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-264d-account-create-bgw6r"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.148725 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-hllg2"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.154926 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0309-account-create-vmfzr"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.162347 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0309-account-create-vmfzr"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.290446 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqrsv\" (UniqueName: 
\"kubernetes.io/projected/09633aa4-95d9-4047-b7e0-e6c90f58845c-kube-api-access-kqrsv\") pod \"09633aa4-95d9-4047-b7e0-e6c90f58845c\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.290528 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-inventory\") pod \"09633aa4-95d9-4047-b7e0-e6c90f58845c\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.290623 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-ssh-key\") pod \"09633aa4-95d9-4047-b7e0-e6c90f58845c\" (UID: \"09633aa4-95d9-4047-b7e0-e6c90f58845c\") " Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.296038 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09633aa4-95d9-4047-b7e0-e6c90f58845c-kube-api-access-kqrsv" (OuterVolumeSpecName: "kube-api-access-kqrsv") pod "09633aa4-95d9-4047-b7e0-e6c90f58845c" (UID: "09633aa4-95d9-4047-b7e0-e6c90f58845c"). InnerVolumeSpecName "kube-api-access-kqrsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.316837 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "09633aa4-95d9-4047-b7e0-e6c90f58845c" (UID: "09633aa4-95d9-4047-b7e0-e6c90f58845c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.320628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-inventory" (OuterVolumeSpecName: "inventory") pod "09633aa4-95d9-4047-b7e0-e6c90f58845c" (UID: "09633aa4-95d9-4047-b7e0-e6c90f58845c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.397233 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.397297 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09633aa4-95d9-4047-b7e0-e6c90f58845c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.397311 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqrsv\" (UniqueName: \"kubernetes.io/projected/09633aa4-95d9-4047-b7e0-e6c90f58845c-kube-api-access-kqrsv\") on node \"crc\" DevicePath \"\"" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.602852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" event={"ID":"09633aa4-95d9-4047-b7e0-e6c90f58845c","Type":"ContainerDied","Data":"71acf9b92aa9d1db45b512d6174d6faf8ee4a4d847c793a5715d1ced72b81793"} Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.602896 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71acf9b92aa9d1db45b512d6174d6faf8ee4a4d847c793a5715d1ced72b81793" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.602939 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.701842 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52"] Nov 25 08:37:43 crc kubenswrapper[4760]: E1125 08:37:43.702438 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09633aa4-95d9-4047-b7e0-e6c90f58845c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.702469 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="09633aa4-95d9-4047-b7e0-e6c90f58845c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.702720 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="09633aa4-95d9-4047-b7e0-e6c90f58845c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.703590 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.705689 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.706444 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.706928 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.708233 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.717807 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52"] Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.906884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsstt\" (UniqueName: \"kubernetes.io/projected/e0f1315a-0771-4e60-995c-423c3b5e977a-kube-api-access-gsstt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpp52\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.906953 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpp52\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:43 crc kubenswrapper[4760]: I1125 08:37:43.907288 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpp52\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.009708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsstt\" (UniqueName: \"kubernetes.io/projected/e0f1315a-0771-4e60-995c-423c3b5e977a-kube-api-access-gsstt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpp52\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.009776 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpp52\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.009845 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpp52\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.020924 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpp52\" (UID: 
\"e0f1315a-0771-4e60-995c-423c3b5e977a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.022567 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpp52\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.039022 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsstt\" (UniqueName: \"kubernetes.io/projected/e0f1315a-0771-4e60-995c-423c3b5e977a-kube-api-access-gsstt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mpp52\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.327742 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.884037 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52"] Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.893122 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.955012 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04926c37-45b4-4ecf-82ff-9613687bb30d" path="/var/lib/kubelet/pods/04926c37-45b4-4ecf-82ff-9613687bb30d/volumes" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.955882 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2097afb5-f032-45c6-a7d4-52b45731db00" path="/var/lib/kubelet/pods/2097afb5-f032-45c6-a7d4-52b45731db00/volumes" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.956617 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4396ce90-2b59-4cba-af25-9121fdb0fc28" path="/var/lib/kubelet/pods/4396ce90-2b59-4cba-af25-9121fdb0fc28/volumes" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.957271 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5084c140-9bd7-4bbf-be7c-37270ee768f8" path="/var/lib/kubelet/pods/5084c140-9bd7-4bbf-be7c-37270ee768f8/volumes" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.958516 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f14c23c-cc14-47d9-89aa-b617eecd2d56" path="/var/lib/kubelet/pods/6f14c23c-cc14-47d9-89aa-b617eecd2d56/volumes" Nov 25 08:37:44 crc kubenswrapper[4760]: I1125 08:37:44.959297 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74dabb8e-81e2-4b92-ba89-436f1127473d" path="/var/lib/kubelet/pods/74dabb8e-81e2-4b92-ba89-436f1127473d/volumes" Nov 25 08:37:45 crc kubenswrapper[4760]: I1125 
08:37:45.619365 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" event={"ID":"e0f1315a-0771-4e60-995c-423c3b5e977a","Type":"ContainerStarted","Data":"be8822c4ad46e4924f20cd8daa767dc548d73b27c8de57c44ff3b9a5c1e71998"} Nov 25 08:37:46 crc kubenswrapper[4760]: I1125 08:37:46.633594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" event={"ID":"e0f1315a-0771-4e60-995c-423c3b5e977a","Type":"ContainerStarted","Data":"f6f64dee971c84aa69ef6091b2e6b8307b0ec963ce6e0ec5d9ecae41da4b60f2"} Nov 25 08:37:46 crc kubenswrapper[4760]: I1125 08:37:46.651870 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" podStartSLOduration=3.121372516 podStartE2EDuration="3.651855687s" podCreationTimestamp="2025-11-25 08:37:43 +0000 UTC" firstStartedPulling="2025-11-25 08:37:44.892900465 +0000 UTC m=+1598.601931260" lastFinishedPulling="2025-11-25 08:37:45.423383636 +0000 UTC m=+1599.132414431" observedRunningTime="2025-11-25 08:37:46.651416134 +0000 UTC m=+1600.360446939" watchObservedRunningTime="2025-11-25 08:37:46.651855687 +0000 UTC m=+1600.360886492" Nov 25 08:37:51 crc kubenswrapper[4760]: I1125 08:37:51.032186 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-w47d2"] Nov 25 08:37:51 crc kubenswrapper[4760]: I1125 08:37:51.041030 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-w47d2"] Nov 25 08:37:52 crc kubenswrapper[4760]: I1125 08:37:52.956480 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6686072-680f-4070-87b2-07c886a28291" path="/var/lib/kubelet/pods/b6686072-680f-4070-87b2-07c886a28291/volumes" Nov 25 08:38:01 crc kubenswrapper[4760]: I1125 08:38:01.746123 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:38:01 crc kubenswrapper[4760]: I1125 08:38:01.746812 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:38:22 crc kubenswrapper[4760]: I1125 08:38:22.962659 4760 generic.go:334] "Generic (PLEG): container finished" podID="e0f1315a-0771-4e60-995c-423c3b5e977a" containerID="f6f64dee971c84aa69ef6091b2e6b8307b0ec963ce6e0ec5d9ecae41da4b60f2" exitCode=0 Nov 25 08:38:22 crc kubenswrapper[4760]: I1125 08:38:22.962771 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" event={"ID":"e0f1315a-0771-4e60-995c-423c3b5e977a","Type":"ContainerDied","Data":"f6f64dee971c84aa69ef6091b2e6b8307b0ec963ce6e0ec5d9ecae41da4b60f2"} Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.422759 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.612882 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-inventory\") pod \"e0f1315a-0771-4e60-995c-423c3b5e977a\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.612959 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-ssh-key\") pod \"e0f1315a-0771-4e60-995c-423c3b5e977a\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.613096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsstt\" (UniqueName: \"kubernetes.io/projected/e0f1315a-0771-4e60-995c-423c3b5e977a-kube-api-access-gsstt\") pod \"e0f1315a-0771-4e60-995c-423c3b5e977a\" (UID: \"e0f1315a-0771-4e60-995c-423c3b5e977a\") " Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.620494 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f1315a-0771-4e60-995c-423c3b5e977a-kube-api-access-gsstt" (OuterVolumeSpecName: "kube-api-access-gsstt") pod "e0f1315a-0771-4e60-995c-423c3b5e977a" (UID: "e0f1315a-0771-4e60-995c-423c3b5e977a"). InnerVolumeSpecName "kube-api-access-gsstt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.639998 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-inventory" (OuterVolumeSpecName: "inventory") pod "e0f1315a-0771-4e60-995c-423c3b5e977a" (UID: "e0f1315a-0771-4e60-995c-423c3b5e977a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.655406 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0f1315a-0771-4e60-995c-423c3b5e977a" (UID: "e0f1315a-0771-4e60-995c-423c3b5e977a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.716021 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsstt\" (UniqueName: \"kubernetes.io/projected/e0f1315a-0771-4e60-995c-423c3b5e977a-kube-api-access-gsstt\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.716066 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.716091 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0f1315a-0771-4e60-995c-423c3b5e977a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.991864 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" event={"ID":"e0f1315a-0771-4e60-995c-423c3b5e977a","Type":"ContainerDied","Data":"be8822c4ad46e4924f20cd8daa767dc548d73b27c8de57c44ff3b9a5c1e71998"} Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.991921 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8822c4ad46e4924f20cd8daa767dc548d73b27c8de57c44ff3b9a5c1e71998" Nov 25 08:38:24 crc kubenswrapper[4760]: I1125 08:38:24.991926 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.110585 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr"] Nov 25 08:38:25 crc kubenswrapper[4760]: E1125 08:38:25.111138 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f1315a-0771-4e60-995c-423c3b5e977a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.111169 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f1315a-0771-4e60-995c-423c3b5e977a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.111624 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f1315a-0771-4e60-995c-423c3b5e977a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.112768 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.115901 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.116575 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.116789 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.117289 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.126721 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.126778 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.126825 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7sh6\" (UniqueName: \"kubernetes.io/projected/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-kube-api-access-b7sh6\") pod 
\"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.128631 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr"] Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.228123 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.228182 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.228221 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7sh6\" (UniqueName: \"kubernetes.io/projected/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-kube-api-access-b7sh6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.232830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr\" (UID: 
\"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.233990 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.250621 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7sh6\" (UniqueName: \"kubernetes.io/projected/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-kube-api-access-b7sh6\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:25 crc kubenswrapper[4760]: I1125 08:38:25.473489 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:26 crc kubenswrapper[4760]: I1125 08:38:26.046889 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr"] Nov 25 08:38:27 crc kubenswrapper[4760]: I1125 08:38:27.015678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" event={"ID":"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4","Type":"ContainerStarted","Data":"c4a6bea9beecf6b24e70659bdfd5928ff75ee43bf9666b3bd076b1f7dbbce5fd"} Nov 25 08:38:27 crc kubenswrapper[4760]: I1125 08:38:27.015958 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" event={"ID":"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4","Type":"ContainerStarted","Data":"dc58fd5789f1204e647998c4493156c4975bc32afd981fbfd3590d469d456f36"} Nov 25 08:38:27 crc kubenswrapper[4760]: I1125 08:38:27.041308 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" podStartSLOduration=1.663530508 podStartE2EDuration="2.041284198s" podCreationTimestamp="2025-11-25 08:38:25 +0000 UTC" firstStartedPulling="2025-11-25 08:38:26.054943211 +0000 UTC m=+1639.763974036" lastFinishedPulling="2025-11-25 08:38:26.432696931 +0000 UTC m=+1640.141727726" observedRunningTime="2025-11-25 08:38:27.033303769 +0000 UTC m=+1640.742334564" watchObservedRunningTime="2025-11-25 08:38:27.041284198 +0000 UTC m=+1640.750314993" Nov 25 08:38:30 crc kubenswrapper[4760]: I1125 08:38:30.763466 4760 scope.go:117] "RemoveContainer" containerID="082a1d3b7bfd7d975171c03d8c2f49a043d0397a830052b9bf5ee76c2e72e569" Nov 25 08:38:30 crc kubenswrapper[4760]: I1125 08:38:30.800332 4760 scope.go:117] "RemoveContainer" containerID="d684ef9a1354366f3683409404e0180c2f97fb0aeaa031c922a06844b177a1f2" Nov 25 08:38:30 crc 
kubenswrapper[4760]: I1125 08:38:30.836397 4760 scope.go:117] "RemoveContainer" containerID="00dd4b6c4d2333e86ea4387c4861210196355f871789511c1bda617805e48779" Nov 25 08:38:30 crc kubenswrapper[4760]: I1125 08:38:30.873828 4760 scope.go:117] "RemoveContainer" containerID="ae887807b72417fd7fa33a6c1b1f897826e7f2e2c1b51f530096a4cef78dc7ad" Nov 25 08:38:30 crc kubenswrapper[4760]: I1125 08:38:30.916525 4760 scope.go:117] "RemoveContainer" containerID="f97fb1034575c6317e545d393ef5a3a8b155df265adc7f0cf445a49b85110815" Nov 25 08:38:30 crc kubenswrapper[4760]: I1125 08:38:30.985716 4760 scope.go:117] "RemoveContainer" containerID="061b9ba1c22a8cbe27b653389547ce134aa0ea069badec34203a07e49eb9f48e" Nov 25 08:38:31 crc kubenswrapper[4760]: I1125 08:38:31.050885 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5zhtm"] Nov 25 08:38:31 crc kubenswrapper[4760]: I1125 08:38:31.057789 4760 scope.go:117] "RemoveContainer" containerID="13d92cca116417133398ad6f495b7a2ae8826d5038b30aad12f9c5ea106afd79" Nov 25 08:38:31 crc kubenswrapper[4760]: I1125 08:38:31.058033 4760 generic.go:334] "Generic (PLEG): container finished" podID="f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4" containerID="c4a6bea9beecf6b24e70659bdfd5928ff75ee43bf9666b3bd076b1f7dbbce5fd" exitCode=0 Nov 25 08:38:31 crc kubenswrapper[4760]: I1125 08:38:31.058136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" event={"ID":"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4","Type":"ContainerDied","Data":"c4a6bea9beecf6b24e70659bdfd5928ff75ee43bf9666b3bd076b1f7dbbce5fd"} Nov 25 08:38:31 crc kubenswrapper[4760]: I1125 08:38:31.065649 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5zhtm"] Nov 25 08:38:31 crc kubenswrapper[4760]: I1125 08:38:31.080955 4760 scope.go:117] "RemoveContainer" containerID="b9f78ba9515147a8e5672ba6413b9ff3bc88109bc2a151ee340fcfcb12db5934" Nov 25 08:38:31 crc 
kubenswrapper[4760]: I1125 08:38:31.746090 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:38:31 crc kubenswrapper[4760]: I1125 08:38:31.746443 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.540402 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.670028 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7sh6\" (UniqueName: \"kubernetes.io/projected/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-kube-api-access-b7sh6\") pod \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.671049 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-inventory\") pod \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.671220 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-ssh-key\") pod \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\" (UID: \"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4\") " Nov 25 
08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.677985 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-kube-api-access-b7sh6" (OuterVolumeSpecName: "kube-api-access-b7sh6") pod "f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4" (UID: "f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4"). InnerVolumeSpecName "kube-api-access-b7sh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.710573 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-inventory" (OuterVolumeSpecName: "inventory") pod "f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4" (UID: "f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.714232 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4" (UID: "f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.774364 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7sh6\" (UniqueName: \"kubernetes.io/projected/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-kube-api-access-b7sh6\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.774400 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.774409 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:32 crc kubenswrapper[4760]: I1125 08:38:32.951456 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5394304b-1d0b-496b-9c30-383d1822341a" path="/var/lib/kubelet/pods/5394304b-1d0b-496b-9c30-383d1822341a/volumes" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.096288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" event={"ID":"f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4","Type":"ContainerDied","Data":"dc58fd5789f1204e647998c4493156c4975bc32afd981fbfd3590d469d456f36"} Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.096347 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc58fd5789f1204e647998c4493156c4975bc32afd981fbfd3590d469d456f36" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.096357 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.147632 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9"] Nov 25 08:38:33 crc kubenswrapper[4760]: E1125 08:38:33.148133 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.148160 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.148437 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.149169 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.151533 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.153803 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.154576 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.155014 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.158536 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9"] Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.181299 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.181532 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-896cr\" (UniqueName: \"kubernetes.io/projected/05fa4b33-309d-45cd-be1f-4d8e313a1f60-kube-api-access-896cr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.181610 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.282965 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-896cr\" (UniqueName: \"kubernetes.io/projected/05fa4b33-309d-45cd-be1f-4d8e313a1f60-kube-api-access-896cr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.283057 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.283126 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.287687 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9\" (UID: 
\"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.288167 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.301358 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-896cr\" (UniqueName: \"kubernetes.io/projected/05fa4b33-309d-45cd-be1f-4d8e313a1f60-kube-api-access-896cr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.466382 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:38:33 crc kubenswrapper[4760]: I1125 08:38:33.999420 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9"] Nov 25 08:38:34 crc kubenswrapper[4760]: W1125 08:38:34.006871 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05fa4b33_309d_45cd_be1f_4d8e313a1f60.slice/crio-3e2757f57d074ef867bff397d23d05334447a32b4abfe33700c536cf7a1818d2 WatchSource:0}: Error finding container 3e2757f57d074ef867bff397d23d05334447a32b4abfe33700c536cf7a1818d2: Status 404 returned error can't find the container with id 3e2757f57d074ef867bff397d23d05334447a32b4abfe33700c536cf7a1818d2 Nov 25 08:38:34 crc kubenswrapper[4760]: I1125 08:38:34.109824 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" event={"ID":"05fa4b33-309d-45cd-be1f-4d8e313a1f60","Type":"ContainerStarted","Data":"3e2757f57d074ef867bff397d23d05334447a32b4abfe33700c536cf7a1818d2"} Nov 25 08:38:35 crc kubenswrapper[4760]: I1125 08:38:35.121473 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" event={"ID":"05fa4b33-309d-45cd-be1f-4d8e313a1f60","Type":"ContainerStarted","Data":"839f652e3ca5e9821607b0b50bab056b4f3e327207570d48def85fff909b4c08"} Nov 25 08:38:35 crc kubenswrapper[4760]: I1125 08:38:35.148723 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" podStartSLOduration=1.747661806 podStartE2EDuration="2.148688782s" podCreationTimestamp="2025-11-25 08:38:33 +0000 UTC" firstStartedPulling="2025-11-25 08:38:34.011561153 +0000 UTC m=+1647.720591958" lastFinishedPulling="2025-11-25 08:38:34.412588139 +0000 UTC m=+1648.121618934" 
observedRunningTime="2025-11-25 08:38:35.14338939 +0000 UTC m=+1648.852420205" watchObservedRunningTime="2025-11-25 08:38:35.148688782 +0000 UTC m=+1648.857719597" Nov 25 08:38:37 crc kubenswrapper[4760]: I1125 08:38:37.032237 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2s8lr"] Nov 25 08:38:37 crc kubenswrapper[4760]: I1125 08:38:37.043061 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hhsz8"] Nov 25 08:38:37 crc kubenswrapper[4760]: I1125 08:38:37.053867 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2s8lr"] Nov 25 08:38:37 crc kubenswrapper[4760]: I1125 08:38:37.062715 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wrwr6"] Nov 25 08:38:37 crc kubenswrapper[4760]: I1125 08:38:37.071685 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hhsz8"] Nov 25 08:38:37 crc kubenswrapper[4760]: I1125 08:38:37.079692 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wrwr6"] Nov 25 08:38:38 crc kubenswrapper[4760]: I1125 08:38:38.948848 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd46062-7573-4651-a59d-f32a136433b8" path="/var/lib/kubelet/pods/2bd46062-7573-4651-a59d-f32a136433b8/volumes" Nov 25 08:38:38 crc kubenswrapper[4760]: I1125 08:38:38.949789 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="409e55ac-7906-4f67-ba89-f823a28796a5" path="/var/lib/kubelet/pods/409e55ac-7906-4f67-ba89-f823a28796a5/volumes" Nov 25 08:38:38 crc kubenswrapper[4760]: I1125 08:38:38.950761 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62eb64aa-dbc6-49d6-b8ab-8fffda94afa6" path="/var/lib/kubelet/pods/62eb64aa-dbc6-49d6-b8ab-8fffda94afa6/volumes" Nov 25 08:38:55 crc kubenswrapper[4760]: I1125 08:38:55.039211 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-db-sync-pk2zm"] Nov 25 08:38:55 crc kubenswrapper[4760]: I1125 08:38:55.047978 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-pk2zm"] Nov 25 08:38:56 crc kubenswrapper[4760]: I1125 08:38:56.950073 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99920db5-d382-4159-a705-53428f8a61a8" path="/var/lib/kubelet/pods/99920db5-d382-4159-a705-53428f8a61a8/volumes" Nov 25 08:39:01 crc kubenswrapper[4760]: I1125 08:39:01.746811 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:39:01 crc kubenswrapper[4760]: I1125 08:39:01.747166 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:39:01 crc kubenswrapper[4760]: I1125 08:39:01.747275 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:39:01 crc kubenswrapper[4760]: I1125 08:39:01.748073 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:39:01 crc kubenswrapper[4760]: I1125 08:39:01.748156 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" gracePeriod=600 Nov 25 08:39:01 crc kubenswrapper[4760]: E1125 08:39:01.876575 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:39:02 crc kubenswrapper[4760]: I1125 08:39:02.421849 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" exitCode=0 Nov 25 08:39:02 crc kubenswrapper[4760]: I1125 08:39:02.421891 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c"} Nov 25 08:39:02 crc kubenswrapper[4760]: I1125 08:39:02.421923 4760 scope.go:117] "RemoveContainer" containerID="ca52788d396deaeb74b41a0b267f55e1f30d7a61af988b5f3d847e16dbb9f1b0" Nov 25 08:39:02 crc kubenswrapper[4760]: I1125 08:39:02.422885 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:39:02 crc kubenswrapper[4760]: E1125 08:39:02.423469 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:39:14 crc kubenswrapper[4760]: I1125 08:39:14.938879 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:39:14 crc kubenswrapper[4760]: E1125 08:39:14.940612 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:39:25 crc kubenswrapper[4760]: I1125 08:39:25.624696 4760 generic.go:334] "Generic (PLEG): container finished" podID="05fa4b33-309d-45cd-be1f-4d8e313a1f60" containerID="839f652e3ca5e9821607b0b50bab056b4f3e327207570d48def85fff909b4c08" exitCode=0 Nov 25 08:39:25 crc kubenswrapper[4760]: I1125 08:39:25.624776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" event={"ID":"05fa4b33-309d-45cd-be1f-4d8e313a1f60","Type":"ContainerDied","Data":"839f652e3ca5e9821607b0b50bab056b4f3e327207570d48def85fff909b4c08"} Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.066462 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.201618 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-ssh-key\") pod \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.202021 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-inventory\") pod \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.202092 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-896cr\" (UniqueName: \"kubernetes.io/projected/05fa4b33-309d-45cd-be1f-4d8e313a1f60-kube-api-access-896cr\") pod \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\" (UID: \"05fa4b33-309d-45cd-be1f-4d8e313a1f60\") " Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.207603 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fa4b33-309d-45cd-be1f-4d8e313a1f60-kube-api-access-896cr" (OuterVolumeSpecName: "kube-api-access-896cr") pod "05fa4b33-309d-45cd-be1f-4d8e313a1f60" (UID: "05fa4b33-309d-45cd-be1f-4d8e313a1f60"). InnerVolumeSpecName "kube-api-access-896cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.231223 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-inventory" (OuterVolumeSpecName: "inventory") pod "05fa4b33-309d-45cd-be1f-4d8e313a1f60" (UID: "05fa4b33-309d-45cd-be1f-4d8e313a1f60"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.239434 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "05fa4b33-309d-45cd-be1f-4d8e313a1f60" (UID: "05fa4b33-309d-45cd-be1f-4d8e313a1f60"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.304869 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.304906 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05fa4b33-309d-45cd-be1f-4d8e313a1f60-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.304920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-896cr\" (UniqueName: \"kubernetes.io/projected/05fa4b33-309d-45cd-be1f-4d8e313a1f60-kube-api-access-896cr\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.645711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" event={"ID":"05fa4b33-309d-45cd-be1f-4d8e313a1f60","Type":"ContainerDied","Data":"3e2757f57d074ef867bff397d23d05334447a32b4abfe33700c536cf7a1818d2"} Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.645745 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e2757f57d074ef867bff397d23d05334447a32b4abfe33700c536cf7a1818d2" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.646078 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.739450 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4cj5"] Nov 25 08:39:27 crc kubenswrapper[4760]: E1125 08:39:27.739940 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fa4b33-309d-45cd-be1f-4d8e313a1f60" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.739969 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fa4b33-309d-45cd-be1f-4d8e313a1f60" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.740219 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fa4b33-309d-45cd-be1f-4d8e313a1f60" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.741144 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.745079 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.745520 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.745736 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.746571 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.749188 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4cj5"] Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.813984 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5zjv\" (UniqueName: \"kubernetes.io/projected/a2b823ad-88b0-4ee6-a666-c19abc17b99a-kube-api-access-c5zjv\") pod \"ssh-known-hosts-edpm-deployment-m4cj5\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.814104 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m4cj5\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.814135 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m4cj5\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.915376 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5zjv\" (UniqueName: \"kubernetes.io/projected/a2b823ad-88b0-4ee6-a666-c19abc17b99a-kube-api-access-c5zjv\") pod \"ssh-known-hosts-edpm-deployment-m4cj5\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.915482 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m4cj5\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.915511 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m4cj5\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.920216 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-m4cj5\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc 
kubenswrapper[4760]: I1125 08:39:27.924806 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-m4cj5\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.936905 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5zjv\" (UniqueName: \"kubernetes.io/projected/a2b823ad-88b0-4ee6-a666-c19abc17b99a-kube-api-access-c5zjv\") pod \"ssh-known-hosts-edpm-deployment-m4cj5\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:27 crc kubenswrapper[4760]: I1125 08:39:27.938363 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:39:27 crc kubenswrapper[4760]: E1125 08:39:27.938958 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:39:28 crc kubenswrapper[4760]: I1125 08:39:28.068541 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:28 crc kubenswrapper[4760]: I1125 08:39:28.597380 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4cj5"] Nov 25 08:39:28 crc kubenswrapper[4760]: W1125 08:39:28.602094 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2b823ad_88b0_4ee6_a666_c19abc17b99a.slice/crio-63e27f4c49a33d929ea5c009d163ad6164ec2610c651af9e051e69de08623549 WatchSource:0}: Error finding container 63e27f4c49a33d929ea5c009d163ad6164ec2610c651af9e051e69de08623549: Status 404 returned error can't find the container with id 63e27f4c49a33d929ea5c009d163ad6164ec2610c651af9e051e69de08623549 Nov 25 08:39:28 crc kubenswrapper[4760]: I1125 08:39:28.655703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" event={"ID":"a2b823ad-88b0-4ee6-a666-c19abc17b99a","Type":"ContainerStarted","Data":"63e27f4c49a33d929ea5c009d163ad6164ec2610c651af9e051e69de08623549"} Nov 25 08:39:29 crc kubenswrapper[4760]: I1125 08:39:29.664985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" event={"ID":"a2b823ad-88b0-4ee6-a666-c19abc17b99a","Type":"ContainerStarted","Data":"d2cd33877df8d74066d5b22351319bd21ec434011979859eb977b6971c5ff3c0"} Nov 25 08:39:29 crc kubenswrapper[4760]: I1125 08:39:29.694510 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" podStartSLOduration=2.245157235 podStartE2EDuration="2.694489896s" podCreationTimestamp="2025-11-25 08:39:27 +0000 UTC" firstStartedPulling="2025-11-25 08:39:28.604268932 +0000 UTC m=+1702.313299747" lastFinishedPulling="2025-11-25 08:39:29.053601613 +0000 UTC m=+1702.762632408" observedRunningTime="2025-11-25 08:39:29.682291347 +0000 UTC m=+1703.391322162" 
watchObservedRunningTime="2025-11-25 08:39:29.694489896 +0000 UTC m=+1703.403520701" Nov 25 08:39:31 crc kubenswrapper[4760]: I1125 08:39:31.226783 4760 scope.go:117] "RemoveContainer" containerID="4d3668a9f563fd64a7677aaabdab8e137fa20c640ba55e543801942cdf02eb1a" Nov 25 08:39:31 crc kubenswrapper[4760]: I1125 08:39:31.287059 4760 scope.go:117] "RemoveContainer" containerID="ef00cb0f2c9d7a1457a895997b1430dc50e50688832895c39ed22244c166088d" Nov 25 08:39:31 crc kubenswrapper[4760]: I1125 08:39:31.333896 4760 scope.go:117] "RemoveContainer" containerID="8b52c754a29627617f737cb3ed7b115a0a7494c96d3e266b2719d8e4dac85d8c" Nov 25 08:39:31 crc kubenswrapper[4760]: I1125 08:39:31.392930 4760 scope.go:117] "RemoveContainer" containerID="eefb30894d640e7c8009088b0ec1b6f61c9f9b96d25fb9f785cf880b97d2c7f5" Nov 25 08:39:31 crc kubenswrapper[4760]: I1125 08:39:31.435442 4760 scope.go:117] "RemoveContainer" containerID="bb61ac46168e741100342fbac117cf81c11118cea45d5591b125d12a72af1ccf" Nov 25 08:39:36 crc kubenswrapper[4760]: I1125 08:39:36.734103 4760 generic.go:334] "Generic (PLEG): container finished" podID="a2b823ad-88b0-4ee6-a666-c19abc17b99a" containerID="d2cd33877df8d74066d5b22351319bd21ec434011979859eb977b6971c5ff3c0" exitCode=0 Nov 25 08:39:36 crc kubenswrapper[4760]: I1125 08:39:36.734203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" event={"ID":"a2b823ad-88b0-4ee6-a666-c19abc17b99a","Type":"ContainerDied","Data":"d2cd33877df8d74066d5b22351319bd21ec434011979859eb977b6971c5ff3c0"} Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.139863 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.223797 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5zjv\" (UniqueName: \"kubernetes.io/projected/a2b823ad-88b0-4ee6-a666-c19abc17b99a-kube-api-access-c5zjv\") pod \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.224152 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-inventory-0\") pod \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.224184 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-ssh-key-openstack-edpm-ipam\") pod \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\" (UID: \"a2b823ad-88b0-4ee6-a666-c19abc17b99a\") " Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.229295 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b823ad-88b0-4ee6-a666-c19abc17b99a-kube-api-access-c5zjv" (OuterVolumeSpecName: "kube-api-access-c5zjv") pod "a2b823ad-88b0-4ee6-a666-c19abc17b99a" (UID: "a2b823ad-88b0-4ee6-a666-c19abc17b99a"). InnerVolumeSpecName "kube-api-access-c5zjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.255717 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "a2b823ad-88b0-4ee6-a666-c19abc17b99a" (UID: "a2b823ad-88b0-4ee6-a666-c19abc17b99a"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.271729 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a2b823ad-88b0-4ee6-a666-c19abc17b99a" (UID: "a2b823ad-88b0-4ee6-a666-c19abc17b99a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.326884 4760 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.326929 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2b823ad-88b0-4ee6-a666-c19abc17b99a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.326940 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5zjv\" (UniqueName: \"kubernetes.io/projected/a2b823ad-88b0-4ee6-a666-c19abc17b99a-kube-api-access-c5zjv\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.759494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" event={"ID":"a2b823ad-88b0-4ee6-a666-c19abc17b99a","Type":"ContainerDied","Data":"63e27f4c49a33d929ea5c009d163ad6164ec2610c651af9e051e69de08623549"} Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.759531 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63e27f4c49a33d929ea5c009d163ad6164ec2610c651af9e051e69de08623549" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.759592 
4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-m4cj5" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.836157 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr"] Nov 25 08:39:38 crc kubenswrapper[4760]: E1125 08:39:38.836700 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b823ad-88b0-4ee6-a666-c19abc17b99a" containerName="ssh-known-hosts-edpm-deployment" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.836726 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b823ad-88b0-4ee6-a666-c19abc17b99a" containerName="ssh-known-hosts-edpm-deployment" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.836956 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b823ad-88b0-4ee6-a666-c19abc17b99a" containerName="ssh-known-hosts-edpm-deployment" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.837730 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.841462 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.841618 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.841767 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.842613 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.848173 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr"] Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.937563 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46zq\" (UniqueName: \"kubernetes.io/projected/ee204c5b-9af1-49ba-9481-5bf11f90db8a-kube-api-access-c46zq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5tzsr\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.937648 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5tzsr\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:38 crc kubenswrapper[4760]: I1125 08:39:38.937674 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5tzsr\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.039726 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c46zq\" (UniqueName: \"kubernetes.io/projected/ee204c5b-9af1-49ba-9481-5bf11f90db8a-kube-api-access-c46zq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5tzsr\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.039845 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5tzsr\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.039880 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5tzsr\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.040570 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2zt27"] Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.049219 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-mkm9v"] Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.055184 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5tzsr\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.055237 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5tzsr\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.057635 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c46zq\" (UniqueName: \"kubernetes.io/projected/ee204c5b-9af1-49ba-9481-5bf11f90db8a-kube-api-access-c46zq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5tzsr\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.058931 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-388b-account-create-h5j6g"] Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.065426 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2zt27"] Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.071133 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-388b-account-create-h5j6g"] Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.076600 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-mkm9v"] Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.170961 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.722557 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr"] Nov 25 08:39:39 crc kubenswrapper[4760]: I1125 08:39:39.771064 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" event={"ID":"ee204c5b-9af1-49ba-9481-5bf11f90db8a","Type":"ContainerStarted","Data":"8dc9eecd06dfb282cf994e8997fcf0410f06c9a7c873444e81982bd878e85d78"} Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.032668 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-skw7b"] Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.050229 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-bb66-account-create-r9w4q"] Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.057156 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d8c3-account-create-9hlqt"] Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.063618 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-skw7b"] Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.070339 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-bb66-account-create-r9w4q"] Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.076977 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d8c3-account-create-9hlqt"] Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.784779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" event={"ID":"ee204c5b-9af1-49ba-9481-5bf11f90db8a","Type":"ContainerStarted","Data":"27ef4b57f7b4a81bec11827a9c42b0ae75d40d6858072b66bcb3b6f3280efdd3"} Nov 25 08:39:40 crc kubenswrapper[4760]: 
I1125 08:39:40.812846 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" podStartSLOduration=2.399975343 podStartE2EDuration="2.812808118s" podCreationTimestamp="2025-11-25 08:39:38 +0000 UTC" firstStartedPulling="2025-11-25 08:39:39.725537928 +0000 UTC m=+1713.434568763" lastFinishedPulling="2025-11-25 08:39:40.138370753 +0000 UTC m=+1713.847401538" observedRunningTime="2025-11-25 08:39:40.807080184 +0000 UTC m=+1714.516110979" watchObservedRunningTime="2025-11-25 08:39:40.812808118 +0000 UTC m=+1714.521838913" Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.965592 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0993c794-4a24-476a-b473-ea84948835cd" path="/var/lib/kubelet/pods/0993c794-4a24-476a-b473-ea84948835cd/volumes" Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.966380 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ba00a9-0675-4154-8db7-a3dec9528ce1" path="/var/lib/kubelet/pods/24ba00a9-0675-4154-8db7-a3dec9528ce1/volumes" Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.967185 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff4c392-598d-40ec-8803-d97ca2429c37" path="/var/lib/kubelet/pods/4ff4c392-598d-40ec-8803-d97ca2429c37/volumes" Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.967760 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="637b4ab8-7e6b-4068-993c-5dc8f5975b93" path="/var/lib/kubelet/pods/637b4ab8-7e6b-4068-993c-5dc8f5975b93/volumes" Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.969398 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8065060f-1c06-4186-8a41-e864d9256d7b" path="/var/lib/kubelet/pods/8065060f-1c06-4186-8a41-e864d9256d7b/volumes" Nov 25 08:39:40 crc kubenswrapper[4760]: I1125 08:39:40.970083 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fee850d3-ea88-45ef-9a47-56cfe91d2c36" path="/var/lib/kubelet/pods/fee850d3-ea88-45ef-9a47-56cfe91d2c36/volumes" Nov 25 08:39:42 crc kubenswrapper[4760]: I1125 08:39:42.939284 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:39:42 crc kubenswrapper[4760]: E1125 08:39:42.940881 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:39:48 crc kubenswrapper[4760]: I1125 08:39:48.856844 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee204c5b-9af1-49ba-9481-5bf11f90db8a" containerID="27ef4b57f7b4a81bec11827a9c42b0ae75d40d6858072b66bcb3b6f3280efdd3" exitCode=0 Nov 25 08:39:48 crc kubenswrapper[4760]: I1125 08:39:48.856936 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" event={"ID":"ee204c5b-9af1-49ba-9481-5bf11f90db8a","Type":"ContainerDied","Data":"27ef4b57f7b4a81bec11827a9c42b0ae75d40d6858072b66bcb3b6f3280efdd3"} Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.336599 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.355705 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-ssh-key\") pod \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.355829 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-inventory\") pod \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.355897 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c46zq\" (UniqueName: \"kubernetes.io/projected/ee204c5b-9af1-49ba-9481-5bf11f90db8a-kube-api-access-c46zq\") pod \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\" (UID: \"ee204c5b-9af1-49ba-9481-5bf11f90db8a\") " Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.398560 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee204c5b-9af1-49ba-9481-5bf11f90db8a-kube-api-access-c46zq" (OuterVolumeSpecName: "kube-api-access-c46zq") pod "ee204c5b-9af1-49ba-9481-5bf11f90db8a" (UID: "ee204c5b-9af1-49ba-9481-5bf11f90db8a"). InnerVolumeSpecName "kube-api-access-c46zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.402870 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-inventory" (OuterVolumeSpecName: "inventory") pod "ee204c5b-9af1-49ba-9481-5bf11f90db8a" (UID: "ee204c5b-9af1-49ba-9481-5bf11f90db8a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.420274 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ee204c5b-9af1-49ba-9481-5bf11f90db8a" (UID: "ee204c5b-9af1-49ba-9481-5bf11f90db8a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.457765 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.457969 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee204c5b-9af1-49ba-9481-5bf11f90db8a-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.458086 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c46zq\" (UniqueName: \"kubernetes.io/projected/ee204c5b-9af1-49ba-9481-5bf11f90db8a-kube-api-access-c46zq\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.878974 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" event={"ID":"ee204c5b-9af1-49ba-9481-5bf11f90db8a","Type":"ContainerDied","Data":"8dc9eecd06dfb282cf994e8997fcf0410f06c9a7c873444e81982bd878e85d78"} Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.879024 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dc9eecd06dfb282cf994e8997fcf0410f06c9a7c873444e81982bd878e85d78" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.879103 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.975870 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2"] Nov 25 08:39:50 crc kubenswrapper[4760]: E1125 08:39:50.976474 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee204c5b-9af1-49ba-9481-5bf11f90db8a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.976497 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee204c5b-9af1-49ba-9481-5bf11f90db8a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.976719 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee204c5b-9af1-49ba-9481-5bf11f90db8a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.977508 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.979515 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.980635 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.981405 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.985566 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2"] Nov 25 08:39:50 crc kubenswrapper[4760]: I1125 08:39:50.988616 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.071699 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csznb\" (UniqueName: \"kubernetes.io/projected/fce7adff-d9be-4eb1-a330-d365b2ba877b-kube-api-access-csznb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.071807 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.071925 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.173109 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csznb\" (UniqueName: \"kubernetes.io/projected/fce7adff-d9be-4eb1-a330-d365b2ba877b-kube-api-access-csznb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.173295 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.173988 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.178431 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2\" (UID: 
\"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.179922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.205037 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csznb\" (UniqueName: \"kubernetes.io/projected/fce7adff-d9be-4eb1-a330-d365b2ba877b-kube-api-access-csznb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.300140 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.796944 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2"] Nov 25 08:39:51 crc kubenswrapper[4760]: W1125 08:39:51.803451 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfce7adff_d9be_4eb1_a330_d365b2ba877b.slice/crio-cce8b1e59b1de8f574ea91a7e407bc98cfed7f3bb7160e7dc8c9961c305621d0 WatchSource:0}: Error finding container cce8b1e59b1de8f574ea91a7e407bc98cfed7f3bb7160e7dc8c9961c305621d0: Status 404 returned error can't find the container with id cce8b1e59b1de8f574ea91a7e407bc98cfed7f3bb7160e7dc8c9961c305621d0 Nov 25 08:39:51 crc kubenswrapper[4760]: I1125 08:39:51.888104 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" event={"ID":"fce7adff-d9be-4eb1-a330-d365b2ba877b","Type":"ContainerStarted","Data":"cce8b1e59b1de8f574ea91a7e407bc98cfed7f3bb7160e7dc8c9961c305621d0"} Nov 25 08:39:52 crc kubenswrapper[4760]: I1125 08:39:52.902575 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" event={"ID":"fce7adff-d9be-4eb1-a330-d365b2ba877b","Type":"ContainerStarted","Data":"377c81043b3f2a193d6f5a122cb265765a5950bdc68041d927959a8259bafc57"} Nov 25 08:39:52 crc kubenswrapper[4760]: I1125 08:39:52.934481 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" podStartSLOduration=2.447231106 podStartE2EDuration="2.934450613s" podCreationTimestamp="2025-11-25 08:39:50 +0000 UTC" firstStartedPulling="2025-11-25 08:39:51.805725625 +0000 UTC m=+1725.514756410" lastFinishedPulling="2025-11-25 08:39:52.292945122 +0000 UTC m=+1726.001975917" 
observedRunningTime="2025-11-25 08:39:52.932114936 +0000 UTC m=+1726.641145731" watchObservedRunningTime="2025-11-25 08:39:52.934450613 +0000 UTC m=+1726.643481418" Nov 25 08:39:54 crc kubenswrapper[4760]: I1125 08:39:54.938772 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:39:54 crc kubenswrapper[4760]: E1125 08:39:54.939337 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:40:01 crc kubenswrapper[4760]: I1125 08:40:01.994086 4760 generic.go:334] "Generic (PLEG): container finished" podID="fce7adff-d9be-4eb1-a330-d365b2ba877b" containerID="377c81043b3f2a193d6f5a122cb265765a5950bdc68041d927959a8259bafc57" exitCode=0 Nov 25 08:40:01 crc kubenswrapper[4760]: I1125 08:40:01.994185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" event={"ID":"fce7adff-d9be-4eb1-a330-d365b2ba877b","Type":"ContainerDied","Data":"377c81043b3f2a193d6f5a122cb265765a5950bdc68041d927959a8259bafc57"} Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.435698 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.513819 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-ssh-key\") pod \"fce7adff-d9be-4eb1-a330-d365b2ba877b\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.513987 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-inventory\") pod \"fce7adff-d9be-4eb1-a330-d365b2ba877b\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.514151 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csznb\" (UniqueName: \"kubernetes.io/projected/fce7adff-d9be-4eb1-a330-d365b2ba877b-kube-api-access-csznb\") pod \"fce7adff-d9be-4eb1-a330-d365b2ba877b\" (UID: \"fce7adff-d9be-4eb1-a330-d365b2ba877b\") " Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.520578 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce7adff-d9be-4eb1-a330-d365b2ba877b-kube-api-access-csznb" (OuterVolumeSpecName: "kube-api-access-csznb") pod "fce7adff-d9be-4eb1-a330-d365b2ba877b" (UID: "fce7adff-d9be-4eb1-a330-d365b2ba877b"). InnerVolumeSpecName "kube-api-access-csznb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.541639 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fce7adff-d9be-4eb1-a330-d365b2ba877b" (UID: "fce7adff-d9be-4eb1-a330-d365b2ba877b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.555408 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-inventory" (OuterVolumeSpecName: "inventory") pod "fce7adff-d9be-4eb1-a330-d365b2ba877b" (UID: "fce7adff-d9be-4eb1-a330-d365b2ba877b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.615521 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.615586 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csznb\" (UniqueName: \"kubernetes.io/projected/fce7adff-d9be-4eb1-a330-d365b2ba877b-kube-api-access-csznb\") on node \"crc\" DevicePath \"\"" Nov 25 08:40:03 crc kubenswrapper[4760]: I1125 08:40:03.615603 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fce7adff-d9be-4eb1-a330-d365b2ba877b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:40:04 crc kubenswrapper[4760]: I1125 08:40:04.012049 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" event={"ID":"fce7adff-d9be-4eb1-a330-d365b2ba877b","Type":"ContainerDied","Data":"cce8b1e59b1de8f574ea91a7e407bc98cfed7f3bb7160e7dc8c9961c305621d0"} Nov 25 08:40:04 crc kubenswrapper[4760]: I1125 08:40:04.012106 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cce8b1e59b1de8f574ea91a7e407bc98cfed7f3bb7160e7dc8c9961c305621d0" Nov 25 08:40:04 crc kubenswrapper[4760]: I1125 08:40:04.012081 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2" Nov 25 08:40:06 crc kubenswrapper[4760]: I1125 08:40:06.042817 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gvr9l"] Nov 25 08:40:06 crc kubenswrapper[4760]: I1125 08:40:06.049365 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gvr9l"] Nov 25 08:40:06 crc kubenswrapper[4760]: I1125 08:40:06.953863 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f30b80-e115-44c9-8995-f09ee775ce7b" path="/var/lib/kubelet/pods/36f30b80-e115-44c9-8995-f09ee775ce7b/volumes" Nov 25 08:40:07 crc kubenswrapper[4760]: I1125 08:40:07.939216 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:40:07 crc kubenswrapper[4760]: E1125 08:40:07.939817 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:40:20 crc kubenswrapper[4760]: I1125 08:40:20.939120 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:40:20 crc kubenswrapper[4760]: E1125 08:40:20.940107 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" 
podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:40:29 crc kubenswrapper[4760]: I1125 08:40:29.041185 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zv989"] Nov 25 08:40:29 crc kubenswrapper[4760]: I1125 08:40:29.047965 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zv989"] Nov 25 08:40:30 crc kubenswrapper[4760]: I1125 08:40:30.041615 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dq2fl"] Nov 25 08:40:30 crc kubenswrapper[4760]: I1125 08:40:30.052092 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dq2fl"] Nov 25 08:40:30 crc kubenswrapper[4760]: I1125 08:40:30.952351 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152d5f92-3188-4d96-8594-455aacbb0e4a" path="/var/lib/kubelet/pods/152d5f92-3188-4d96-8594-455aacbb0e4a/volumes" Nov 25 08:40:30 crc kubenswrapper[4760]: I1125 08:40:30.953868 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44f13d4-c189-4609-944a-3dbaaee53e6b" path="/var/lib/kubelet/pods/c44f13d4-c189-4609-944a-3dbaaee53e6b/volumes" Nov 25 08:40:31 crc kubenswrapper[4760]: I1125 08:40:31.553513 4760 scope.go:117] "RemoveContainer" containerID="00e9f2f7a936579eb5e2e1aace18d53d59e3e30971949b53140b71e564714482" Nov 25 08:40:31 crc kubenswrapper[4760]: I1125 08:40:31.588638 4760 scope.go:117] "RemoveContainer" containerID="28841002acc8c9cc96afadf2043270921dd3c06c81a4d376e463e05c33d9208b" Nov 25 08:40:31 crc kubenswrapper[4760]: I1125 08:40:31.665489 4760 scope.go:117] "RemoveContainer" containerID="02ab8fb2b82832f3f33f5094dbdcde15c49e3b4e13d95978d9fac864ecb65acb" Nov 25 08:40:31 crc kubenswrapper[4760]: I1125 08:40:31.729032 4760 scope.go:117] "RemoveContainer" containerID="3e6b0169a360a96744b553fc190d26554e8e2264d7eb67351fe738196ade51bb" Nov 25 08:40:31 crc kubenswrapper[4760]: I1125 08:40:31.766771 4760 
scope.go:117] "RemoveContainer" containerID="0fc31a4aea11467b98541fdd66687da138fb33c403417abf6c44e3d343da5fce" Nov 25 08:40:31 crc kubenswrapper[4760]: I1125 08:40:31.831990 4760 scope.go:117] "RemoveContainer" containerID="bc4bdd4adfc52a3a2f44d4963b4aa3c3062ed598f9f8cc44350bafca0ccdfe2a" Nov 25 08:40:31 crc kubenswrapper[4760]: I1125 08:40:31.852952 4760 scope.go:117] "RemoveContainer" containerID="b28632d12bc38d13a25dc1f56ef8f3c8e1dc901574857179c4ed50b4a6e4276b" Nov 25 08:40:31 crc kubenswrapper[4760]: I1125 08:40:31.883457 4760 scope.go:117] "RemoveContainer" containerID="b7a405f44808bc3841f17df9cd22edc34afc8f2c2797e3cde506423b5dd0b306" Nov 25 08:40:31 crc kubenswrapper[4760]: I1125 08:40:31.901196 4760 scope.go:117] "RemoveContainer" containerID="49049853a94d1f10b388fdd15cdd1b37778a3435229c40fc9e75dd19ea42d278" Nov 25 08:40:34 crc kubenswrapper[4760]: I1125 08:40:34.939116 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:40:34 crc kubenswrapper[4760]: E1125 08:40:34.939848 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:40:46 crc kubenswrapper[4760]: I1125 08:40:46.944548 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:40:46 crc kubenswrapper[4760]: E1125 08:40:46.945226 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:41:00 crc kubenswrapper[4760]: I1125 08:41:00.939403 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:41:00 crc kubenswrapper[4760]: E1125 08:41:00.940456 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:41:14 crc kubenswrapper[4760]: I1125 08:41:14.048742 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6r8p7"] Nov 25 08:41:14 crc kubenswrapper[4760]: I1125 08:41:14.057496 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6r8p7"] Nov 25 08:41:14 crc kubenswrapper[4760]: I1125 08:41:14.962121 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc08500-3352-47d3-97f8-c269676edd00" path="/var/lib/kubelet/pods/7cc08500-3352-47d3-97f8-c269676edd00/volumes" Nov 25 08:41:15 crc kubenswrapper[4760]: I1125 08:41:15.938576 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:41:15 crc kubenswrapper[4760]: E1125 08:41:15.939103 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:41:29 crc kubenswrapper[4760]: I1125 08:41:29.939110 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:41:29 crc kubenswrapper[4760]: E1125 08:41:29.939826 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:41:32 crc kubenswrapper[4760]: I1125 08:41:32.058539 4760 scope.go:117] "RemoveContainer" containerID="276fb59c2db16b6cfacf73005e89578f1431cc21ab51639bb8d7463f11c3c746" Nov 25 08:41:44 crc kubenswrapper[4760]: I1125 08:41:44.938475 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:41:44 crc kubenswrapper[4760]: E1125 08:41:44.940154 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:41:57 crc kubenswrapper[4760]: I1125 08:41:57.939345 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:41:57 crc kubenswrapper[4760]: E1125 08:41:57.940128 4760 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:42:09 crc kubenswrapper[4760]: I1125 08:42:09.938700 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:42:09 crc kubenswrapper[4760]: E1125 08:42:09.939446 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:42:22 crc kubenswrapper[4760]: I1125 08:42:22.940030 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:42:22 crc kubenswrapper[4760]: E1125 08:42:22.940835 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:42:33 crc kubenswrapper[4760]: I1125 08:42:33.939690 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:42:33 crc kubenswrapper[4760]: E1125 
08:42:33.941496 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:42:48 crc kubenswrapper[4760]: I1125 08:42:48.939522 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:42:48 crc kubenswrapper[4760]: E1125 08:42:48.940345 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:43:01 crc kubenswrapper[4760]: I1125 08:43:01.938113 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:43:01 crc kubenswrapper[4760]: E1125 08:43:01.938894 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:43:16 crc kubenswrapper[4760]: I1125 08:43:16.944317 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:43:16 crc 
kubenswrapper[4760]: E1125 08:43:16.945015 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:43:29 crc kubenswrapper[4760]: I1125 08:43:29.938893 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:43:29 crc kubenswrapper[4760]: E1125 08:43:29.941305 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:43:41 crc kubenswrapper[4760]: I1125 08:43:41.938899 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:43:41 crc kubenswrapper[4760]: E1125 08:43:41.939605 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:43:55 crc kubenswrapper[4760]: I1125 08:43:55.938783 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 
25 08:43:55 crc kubenswrapper[4760]: E1125 08:43:55.939659 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:44:09 crc kubenswrapper[4760]: I1125 08:44:09.939096 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:44:10 crc kubenswrapper[4760]: I1125 08:44:10.202424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"a867ce918c353a52f7d744d4ae5764d73a3af9c88d9c5804bb0260064416eb30"} Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.317687 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ttwnc"] Nov 25 08:44:34 crc kubenswrapper[4760]: E1125 08:44:34.318715 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce7adff-d9be-4eb1-a330-d365b2ba877b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.318735 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce7adff-d9be-4eb1-a330-d365b2ba877b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.318928 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce7adff-d9be-4eb1-a330-d365b2ba877b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.320298 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.326207 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttwnc"] Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.326532 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8e687e-f18e-4f36-aefc-59c644196614-utilities\") pod \"community-operators-ttwnc\" (UID: \"3d8e687e-f18e-4f36-aefc-59c644196614\") " pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.326871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8e687e-f18e-4f36-aefc-59c644196614-catalog-content\") pod \"community-operators-ttwnc\" (UID: \"3d8e687e-f18e-4f36-aefc-59c644196614\") " pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.327091 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5tb7\" (UniqueName: \"kubernetes.io/projected/3d8e687e-f18e-4f36-aefc-59c644196614-kube-api-access-z5tb7\") pod \"community-operators-ttwnc\" (UID: \"3d8e687e-f18e-4f36-aefc-59c644196614\") " pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.430128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8e687e-f18e-4f36-aefc-59c644196614-catalog-content\") pod \"community-operators-ttwnc\" (UID: \"3d8e687e-f18e-4f36-aefc-59c644196614\") " pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.430230 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z5tb7\" (UniqueName: \"kubernetes.io/projected/3d8e687e-f18e-4f36-aefc-59c644196614-kube-api-access-z5tb7\") pod \"community-operators-ttwnc\" (UID: \"3d8e687e-f18e-4f36-aefc-59c644196614\") " pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.430393 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8e687e-f18e-4f36-aefc-59c644196614-utilities\") pod \"community-operators-ttwnc\" (UID: \"3d8e687e-f18e-4f36-aefc-59c644196614\") " pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.430972 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8e687e-f18e-4f36-aefc-59c644196614-catalog-content\") pod \"community-operators-ttwnc\" (UID: \"3d8e687e-f18e-4f36-aefc-59c644196614\") " pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.431109 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8e687e-f18e-4f36-aefc-59c644196614-utilities\") pod \"community-operators-ttwnc\" (UID: \"3d8e687e-f18e-4f36-aefc-59c644196614\") " pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.456169 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5tb7\" (UniqueName: \"kubernetes.io/projected/3d8e687e-f18e-4f36-aefc-59c644196614-kube-api-access-z5tb7\") pod \"community-operators-ttwnc\" (UID: \"3d8e687e-f18e-4f36-aefc-59c644196614\") " pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:34 crc kubenswrapper[4760]: I1125 08:44:34.644320 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:35 crc kubenswrapper[4760]: I1125 08:44:35.110472 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttwnc"] Nov 25 08:44:35 crc kubenswrapper[4760]: I1125 08:44:35.432631 4760 generic.go:334] "Generic (PLEG): container finished" podID="3d8e687e-f18e-4f36-aefc-59c644196614" containerID="dc2c60868fe4d1dbaf42b95b673ea26084fb4a191ef5c7a0b92f969b01d385c0" exitCode=0 Nov 25 08:44:35 crc kubenswrapper[4760]: I1125 08:44:35.432720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttwnc" event={"ID":"3d8e687e-f18e-4f36-aefc-59c644196614","Type":"ContainerDied","Data":"dc2c60868fe4d1dbaf42b95b673ea26084fb4a191ef5c7a0b92f969b01d385c0"} Nov 25 08:44:35 crc kubenswrapper[4760]: I1125 08:44:35.432884 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttwnc" event={"ID":"3d8e687e-f18e-4f36-aefc-59c644196614","Type":"ContainerStarted","Data":"8202f755b61ed6079f7ba57ab537fb9a2b9887349b8c5ce3a0dc9aaa523c2ce2"} Nov 25 08:44:35 crc kubenswrapper[4760]: I1125 08:44:35.448070 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:44:40 crc kubenswrapper[4760]: I1125 08:44:40.478398 4760 generic.go:334] "Generic (PLEG): container finished" podID="3d8e687e-f18e-4f36-aefc-59c644196614" containerID="c3f5b7057ba51b12010567317428d34b4b71a287fa8e6ebdeabf258ccd8e532b" exitCode=0 Nov 25 08:44:40 crc kubenswrapper[4760]: I1125 08:44:40.478458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttwnc" event={"ID":"3d8e687e-f18e-4f36-aefc-59c644196614","Type":"ContainerDied","Data":"c3f5b7057ba51b12010567317428d34b4b71a287fa8e6ebdeabf258ccd8e532b"} Nov 25 08:44:41 crc kubenswrapper[4760]: I1125 08:44:41.488181 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-ttwnc" event={"ID":"3d8e687e-f18e-4f36-aefc-59c644196614","Type":"ContainerStarted","Data":"1d9b50f73861cb063176ea9c2657c8906edbc4b0339519eb52207c6273109746"} Nov 25 08:44:41 crc kubenswrapper[4760]: I1125 08:44:41.507733 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ttwnc" podStartSLOduration=2.092630722 podStartE2EDuration="7.507710633s" podCreationTimestamp="2025-11-25 08:44:34 +0000 UTC" firstStartedPulling="2025-11-25 08:44:35.447693703 +0000 UTC m=+2009.156724498" lastFinishedPulling="2025-11-25 08:44:40.862773614 +0000 UTC m=+2014.571804409" observedRunningTime="2025-11-25 08:44:41.502538905 +0000 UTC m=+2015.211569710" watchObservedRunningTime="2025-11-25 08:44:41.507710633 +0000 UTC m=+2015.216741438" Nov 25 08:44:44 crc kubenswrapper[4760]: I1125 08:44:44.644898 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:44 crc kubenswrapper[4760]: I1125 08:44:44.646156 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:44 crc kubenswrapper[4760]: I1125 08:44:44.692261 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ttwnc" Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.749055 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.756584 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.765169 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2"] Nov 
25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.773871 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sjn67"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.781308 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4cj5"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.787345 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.793285 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.798563 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.804587 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4wkz2"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.810441 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.816873 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tn9lg"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.822391 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-m4cj5"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.827544 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.832904 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tj6s9"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.838782 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-lqhxr"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.844909 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.850524 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dwv4n"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.855969 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mpp52"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.862460 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5tzsr"] Nov 25 08:44:45 crc kubenswrapper[4760]: I1125 08:44:45.868577 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8n77r"] Nov 25 08:44:46 crc kubenswrapper[4760]: I1125 08:44:46.954478 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05fa4b33-309d-45cd-be1f-4d8e313a1f60" path="/var/lib/kubelet/pods/05fa4b33-309d-45cd-be1f-4d8e313a1f60/volumes" Nov 25 08:44:46 crc kubenswrapper[4760]: I1125 08:44:46.955635 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09633aa4-95d9-4047-b7e0-e6c90f58845c" path="/var/lib/kubelet/pods/09633aa4-95d9-4047-b7e0-e6c90f58845c/volumes" Nov 25 08:44:46 crc kubenswrapper[4760]: I1125 08:44:46.956604 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c815e4-e305-41af-9739-5d60e5750c12" path="/var/lib/kubelet/pods/55c815e4-e305-41af-9739-5d60e5750c12/volumes" Nov 25 08:44:46 crc kubenswrapper[4760]: 
I1125 08:44:46.957867 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58fc7a0f-f6c7-4604-94f1-7af9fe6439de" path="/var/lib/kubelet/pods/58fc7a0f-f6c7-4604-94f1-7af9fe6439de/volumes" Nov 25 08:44:46 crc kubenswrapper[4760]: I1125 08:44:46.959422 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cc5f52f-76f6-430c-a302-b6b36fc84462" path="/var/lib/kubelet/pods/5cc5f52f-76f6-430c-a302-b6b36fc84462/volumes" Nov 25 08:44:46 crc kubenswrapper[4760]: I1125 08:44:46.960449 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b823ad-88b0-4ee6-a666-c19abc17b99a" path="/var/lib/kubelet/pods/a2b823ad-88b0-4ee6-a666-c19abc17b99a/volumes" Nov 25 08:44:46 crc kubenswrapper[4760]: I1125 08:44:46.962074 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f1315a-0771-4e60-995c-423c3b5e977a" path="/var/lib/kubelet/pods/e0f1315a-0771-4e60-995c-423c3b5e977a/volumes" Nov 25 08:44:46 crc kubenswrapper[4760]: I1125 08:44:46.963874 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee204c5b-9af1-49ba-9481-5bf11f90db8a" path="/var/lib/kubelet/pods/ee204c5b-9af1-49ba-9481-5bf11f90db8a/volumes" Nov 25 08:44:46 crc kubenswrapper[4760]: I1125 08:44:46.964627 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4" path="/var/lib/kubelet/pods/f1d95ebe-ae5d-4cba-9284-68c2caa3c7d4/volumes" Nov 25 08:44:46 crc kubenswrapper[4760]: I1125 08:44:46.965365 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce7adff-d9be-4eb1-a330-d365b2ba877b" path="/var/lib/kubelet/pods/fce7adff-d9be-4eb1-a330-d365b2ba877b/volumes" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.202848 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5"] Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.204598 4760 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.206436 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.207618 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.208007 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.208049 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.208385 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.232091 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5"] Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.271142 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.271264 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: 
\"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.271310 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kqk\" (UniqueName: \"kubernetes.io/projected/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-kube-api-access-j6kqk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.271343 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.271417 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.372992 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.373145 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.373229 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.373295 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6kqk\" (UniqueName: \"kubernetes.io/projected/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-kube-api-access-j6kqk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.373330 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.380513 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.381973 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.383149 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.387794 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.391452 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6kqk\" (UniqueName: \"kubernetes.io/projected/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-kube-api-access-j6kqk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:52 crc kubenswrapper[4760]: I1125 08:44:52.535852 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" Nov 25 08:44:53 crc kubenswrapper[4760]: I1125 08:44:53.087034 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5"] Nov 25 08:44:53 crc kubenswrapper[4760]: W1125 08:44:53.087584 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5606daaf_d5b9_4ed2_a9aa_5e715141d4e4.slice/crio-fd0ef70cbf9451333b7c15944cdc6558198e28e4a470239aa2678a4adc454d30 WatchSource:0}: Error finding container fd0ef70cbf9451333b7c15944cdc6558198e28e4a470239aa2678a4adc454d30: Status 404 returned error can't find the container with id fd0ef70cbf9451333b7c15944cdc6558198e28e4a470239aa2678a4adc454d30 Nov 25 08:44:53 crc kubenswrapper[4760]: I1125 08:44:53.603494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" event={"ID":"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4","Type":"ContainerStarted","Data":"fd0ef70cbf9451333b7c15944cdc6558198e28e4a470239aa2678a4adc454d30"} Nov 25 08:44:54 crc kubenswrapper[4760]: I1125 08:44:54.615373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" event={"ID":"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4","Type":"ContainerStarted","Data":"e2e44f4802a203cea5951d180988cfcca5878a71a48bb9d64e9a201c8b6884a4"} Nov 25 08:44:54 crc kubenswrapper[4760]: I1125 08:44:54.632445 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" podStartSLOduration=2.007968952 podStartE2EDuration="2.632419474s" podCreationTimestamp="2025-11-25 08:44:52 +0000 UTC" firstStartedPulling="2025-11-25 08:44:53.090593052 +0000 UTC m=+2026.799623847" lastFinishedPulling="2025-11-25 08:44:53.715043564 +0000 UTC m=+2027.424074369" 
observedRunningTime="2025-11-25 08:44:54.629460669 +0000 UTC m=+2028.338491474" watchObservedRunningTime="2025-11-25 08:44:54.632419474 +0000 UTC m=+2028.341450269"
Nov 25 08:44:54 crc kubenswrapper[4760]: I1125 08:44:54.695222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ttwnc"
Nov 25 08:44:54 crc kubenswrapper[4760]: I1125 08:44:54.768940 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttwnc"]
Nov 25 08:44:54 crc kubenswrapper[4760]: I1125 08:44:54.813671 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mrhl"]
Nov 25 08:44:54 crc kubenswrapper[4760]: I1125 08:44:54.813939 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7mrhl" podUID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerName="registry-server" containerID="cri-o://65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85" gracePeriod=2
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.279931 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mrhl"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.426738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-catalog-content\") pod \"02d0ec21-fa37-4499-8173-5821ec88a61f\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") "
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.426795 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-utilities\") pod \"02d0ec21-fa37-4499-8173-5821ec88a61f\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") "
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.426958 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr9tv\" (UniqueName: \"kubernetes.io/projected/02d0ec21-fa37-4499-8173-5821ec88a61f-kube-api-access-jr9tv\") pod \"02d0ec21-fa37-4499-8173-5821ec88a61f\" (UID: \"02d0ec21-fa37-4499-8173-5821ec88a61f\") "
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.427351 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-utilities" (OuterVolumeSpecName: "utilities") pod "02d0ec21-fa37-4499-8173-5821ec88a61f" (UID: "02d0ec21-fa37-4499-8173-5821ec88a61f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.427613 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.436164 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d0ec21-fa37-4499-8173-5821ec88a61f-kube-api-access-jr9tv" (OuterVolumeSpecName: "kube-api-access-jr9tv") pod "02d0ec21-fa37-4499-8173-5821ec88a61f" (UID: "02d0ec21-fa37-4499-8173-5821ec88a61f"). InnerVolumeSpecName "kube-api-access-jr9tv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.477592 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02d0ec21-fa37-4499-8173-5821ec88a61f" (UID: "02d0ec21-fa37-4499-8173-5821ec88a61f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.556386 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d0ec21-fa37-4499-8173-5821ec88a61f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.556498 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr9tv\" (UniqueName: \"kubernetes.io/projected/02d0ec21-fa37-4499-8173-5821ec88a61f-kube-api-access-jr9tv\") on node \"crc\" DevicePath \"\""
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.636998 4760 generic.go:334] "Generic (PLEG): container finished" podID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerID="65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85" exitCode=0
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.637120 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mrhl"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.637156 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mrhl" event={"ID":"02d0ec21-fa37-4499-8173-5821ec88a61f","Type":"ContainerDied","Data":"65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85"}
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.637279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mrhl" event={"ID":"02d0ec21-fa37-4499-8173-5821ec88a61f","Type":"ContainerDied","Data":"85134449014b1357c687ab506a5416a5804e50ab7d0293b205fe77529c1b3730"}
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.637316 4760 scope.go:117] "RemoveContainer" containerID="65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.682929 4760 scope.go:117] "RemoveContainer" containerID="83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.688649 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mrhl"]
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.695638 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7mrhl"]
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.712279 4760 scope.go:117] "RemoveContainer" containerID="55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.762134 4760 scope.go:117] "RemoveContainer" containerID="65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85"
Nov 25 08:44:55 crc kubenswrapper[4760]: E1125 08:44:55.762679 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85\": container with ID starting with 65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85 not found: ID does not exist" containerID="65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.762732 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85"} err="failed to get container status \"65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85\": rpc error: code = NotFound desc = could not find container \"65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85\": container with ID starting with 65c79ab4fc6636409f25860c35b802c1aee0b629d7b63801536b3ffb6270fa85 not found: ID does not exist"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.762765 4760 scope.go:117] "RemoveContainer" containerID="83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b"
Nov 25 08:44:55 crc kubenswrapper[4760]: E1125 08:44:55.763077 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b\": container with ID starting with 83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b not found: ID does not exist" containerID="83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.763110 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b"} err="failed to get container status \"83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b\": rpc error: code = NotFound desc = could not find container \"83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b\": container with ID starting with 83a15c8f3958c080722d849a1968a1c9fa2299ff0debf015518d45cfa1728d8b not found: ID does not exist"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.763129 4760 scope.go:117] "RemoveContainer" containerID="55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4"
Nov 25 08:44:55 crc kubenswrapper[4760]: E1125 08:44:55.763610 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4\": container with ID starting with 55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4 not found: ID does not exist" containerID="55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4"
Nov 25 08:44:55 crc kubenswrapper[4760]: I1125 08:44:55.763636 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4"} err="failed to get container status \"55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4\": rpc error: code = NotFound desc = could not find container \"55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4\": container with ID starting with 55623d6b900f05d1d7316d1eab521a38c6c3ad97ae5cfe87dee4a60a5ef947c4 not found: ID does not exist"
Nov 25 08:44:56 crc kubenswrapper[4760]: I1125 08:44:56.952786 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d0ec21-fa37-4499-8173-5821ec88a61f" path="/var/lib/kubelet/pods/02d0ec21-fa37-4499-8173-5821ec88a61f/volumes"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.184919 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"]
Nov 25 08:45:00 crc kubenswrapper[4760]: E1125 08:45:00.185688 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerName="extract-content"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.185706 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerName="extract-content"
Nov 25 08:45:00 crc kubenswrapper[4760]: E1125 08:45:00.185740 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerName="registry-server"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.185750 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerName="registry-server"
Nov 25 08:45:00 crc kubenswrapper[4760]: E1125 08:45:00.185765 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerName="extract-utilities"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.185774 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerName="extract-utilities"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.185995 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d0ec21-fa37-4499-8173-5821ec88a61f" containerName="registry-server"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.186872 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.189195 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.193793 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.200458 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"]
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.346031 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8783b12f-890c-429f-9193-2c8e5d6ce684-config-volume\") pod \"collect-profiles-29401005-msvbw\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.346417 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fskq\" (UniqueName: \"kubernetes.io/projected/8783b12f-890c-429f-9193-2c8e5d6ce684-kube-api-access-4fskq\") pod \"collect-profiles-29401005-msvbw\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.346647 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8783b12f-890c-429f-9193-2c8e5d6ce684-secret-volume\") pod \"collect-profiles-29401005-msvbw\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.449441 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8783b12f-890c-429f-9193-2c8e5d6ce684-config-volume\") pod \"collect-profiles-29401005-msvbw\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.450116 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fskq\" (UniqueName: \"kubernetes.io/projected/8783b12f-890c-429f-9193-2c8e5d6ce684-kube-api-access-4fskq\") pod \"collect-profiles-29401005-msvbw\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.450431 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8783b12f-890c-429f-9193-2c8e5d6ce684-secret-volume\") pod \"collect-profiles-29401005-msvbw\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.450907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8783b12f-890c-429f-9193-2c8e5d6ce684-config-volume\") pod \"collect-profiles-29401005-msvbw\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.460035 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8783b12f-890c-429f-9193-2c8e5d6ce684-secret-volume\") pod \"collect-profiles-29401005-msvbw\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.472468 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fskq\" (UniqueName: \"kubernetes.io/projected/8783b12f-890c-429f-9193-2c8e5d6ce684-kube-api-access-4fskq\") pod \"collect-profiles-29401005-msvbw\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.507795 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:00 crc kubenswrapper[4760]: I1125 08:45:00.964953 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"]
Nov 25 08:45:01 crc kubenswrapper[4760]: I1125 08:45:01.710039 4760 generic.go:334] "Generic (PLEG): container finished" podID="8783b12f-890c-429f-9193-2c8e5d6ce684" containerID="f3f34d4a4469b7c3809f78af104d70eeacb04996a30f0b5056ba2156768c2936" exitCode=0
Nov 25 08:45:01 crc kubenswrapper[4760]: I1125 08:45:01.710357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw" event={"ID":"8783b12f-890c-429f-9193-2c8e5d6ce684","Type":"ContainerDied","Data":"f3f34d4a4469b7c3809f78af104d70eeacb04996a30f0b5056ba2156768c2936"}
Nov 25 08:45:01 crc kubenswrapper[4760]: I1125 08:45:01.710385 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw" event={"ID":"8783b12f-890c-429f-9193-2c8e5d6ce684","Type":"ContainerStarted","Data":"61260195dac4796040bacb3c206c4ea4dd7f962dcfa50668b1afb764995b1422"}
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.041191 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.203363 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8783b12f-890c-429f-9193-2c8e5d6ce684-config-volume\") pod \"8783b12f-890c-429f-9193-2c8e5d6ce684\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") "
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.203596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fskq\" (UniqueName: \"kubernetes.io/projected/8783b12f-890c-429f-9193-2c8e5d6ce684-kube-api-access-4fskq\") pod \"8783b12f-890c-429f-9193-2c8e5d6ce684\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") "
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.203685 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8783b12f-890c-429f-9193-2c8e5d6ce684-secret-volume\") pod \"8783b12f-890c-429f-9193-2c8e5d6ce684\" (UID: \"8783b12f-890c-429f-9193-2c8e5d6ce684\") "
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.204380 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8783b12f-890c-429f-9193-2c8e5d6ce684-config-volume" (OuterVolumeSpecName: "config-volume") pod "8783b12f-890c-429f-9193-2c8e5d6ce684" (UID: "8783b12f-890c-429f-9193-2c8e5d6ce684"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.210416 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8783b12f-890c-429f-9193-2c8e5d6ce684-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8783b12f-890c-429f-9193-2c8e5d6ce684" (UID: "8783b12f-890c-429f-9193-2c8e5d6ce684"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.210530 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8783b12f-890c-429f-9193-2c8e5d6ce684-kube-api-access-4fskq" (OuterVolumeSpecName: "kube-api-access-4fskq") pod "8783b12f-890c-429f-9193-2c8e5d6ce684" (UID: "8783b12f-890c-429f-9193-2c8e5d6ce684"). InnerVolumeSpecName "kube-api-access-4fskq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.305737 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8783b12f-890c-429f-9193-2c8e5d6ce684-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.305790 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8783b12f-890c-429f-9193-2c8e5d6ce684-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.305802 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fskq\" (UniqueName: \"kubernetes.io/projected/8783b12f-890c-429f-9193-2c8e5d6ce684-kube-api-access-4fskq\") on node \"crc\" DevicePath \"\""
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.727879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw" event={"ID":"8783b12f-890c-429f-9193-2c8e5d6ce684","Type":"ContainerDied","Data":"61260195dac4796040bacb3c206c4ea4dd7f962dcfa50668b1afb764995b1422"}
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.728326 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61260195dac4796040bacb3c206c4ea4dd7f962dcfa50668b1afb764995b1422"
Nov 25 08:45:03 crc kubenswrapper[4760]: I1125 08:45:03.728012 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"
Nov 25 08:45:04 crc kubenswrapper[4760]: I1125 08:45:04.117458 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp"]
Nov 25 08:45:04 crc kubenswrapper[4760]: I1125 08:45:04.126178 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-sxgpp"]
Nov 25 08:45:04 crc kubenswrapper[4760]: I1125 08:45:04.953204 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23229ef-e215-4e9f-a8e0-d38be72aef90" path="/var/lib/kubelet/pods/a23229ef-e215-4e9f-a8e0-d38be72aef90/volumes"
Nov 25 08:45:06 crc kubenswrapper[4760]: I1125 08:45:06.754399 4760 generic.go:334] "Generic (PLEG): container finished" podID="5606daaf-d5b9-4ed2-a9aa-5e715141d4e4" containerID="e2e44f4802a203cea5951d180988cfcca5878a71a48bb9d64e9a201c8b6884a4" exitCode=0
Nov 25 08:45:06 crc kubenswrapper[4760]: I1125 08:45:06.754451 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" event={"ID":"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4","Type":"ContainerDied","Data":"e2e44f4802a203cea5951d180988cfcca5878a71a48bb9d64e9a201c8b6884a4"}
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.258503 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.419164 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-repo-setup-combined-ca-bundle\") pod \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") "
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.419247 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6kqk\" (UniqueName: \"kubernetes.io/projected/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-kube-api-access-j6kqk\") pod \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") "
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.419352 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ceph\") pod \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") "
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.419451 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-inventory\") pod \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") "
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.419536 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ssh-key\") pod \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\" (UID: \"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4\") "
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.428453 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-kube-api-access-j6kqk" (OuterVolumeSpecName: "kube-api-access-j6kqk") pod "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4" (UID: "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4"). InnerVolumeSpecName "kube-api-access-j6kqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.428860 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4" (UID: "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.429462 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ceph" (OuterVolumeSpecName: "ceph") pod "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4" (UID: "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.449222 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4" (UID: "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.449251 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-inventory" (OuterVolumeSpecName: "inventory") pod "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4" (UID: "5606daaf-d5b9-4ed2-a9aa-5e715141d4e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.521773 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ceph\") on node \"crc\" DevicePath \"\""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.521827 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.521837 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.521849 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.521858 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6kqk\" (UniqueName: \"kubernetes.io/projected/5606daaf-d5b9-4ed2-a9aa-5e715141d4e4-kube-api-access-j6kqk\") on node \"crc\" DevicePath \"\""
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.772142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5" event={"ID":"5606daaf-d5b9-4ed2-a9aa-5e715141d4e4","Type":"ContainerDied","Data":"fd0ef70cbf9451333b7c15944cdc6558198e28e4a470239aa2678a4adc454d30"}
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.772220 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0ef70cbf9451333b7c15944cdc6558198e28e4a470239aa2678a4adc454d30"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.772354 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.860846 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"]
Nov 25 08:45:08 crc kubenswrapper[4760]: E1125 08:45:08.861300 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8783b12f-890c-429f-9193-2c8e5d6ce684" containerName="collect-profiles"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.861352 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8783b12f-890c-429f-9193-2c8e5d6ce684" containerName="collect-profiles"
Nov 25 08:45:08 crc kubenswrapper[4760]: E1125 08:45:08.861375 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5606daaf-d5b9-4ed2-a9aa-5e715141d4e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.861388 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5606daaf-d5b9-4ed2-a9aa-5e715141d4e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.861600 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8783b12f-890c-429f-9193-2c8e5d6ce684" containerName="collect-profiles"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.861626 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5606daaf-d5b9-4ed2-a9aa-5e715141d4e4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.862646 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.865238 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.867834 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.868359 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.868611 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.868790 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 25 08:45:08 crc kubenswrapper[4760]: I1125 08:45:08.870874 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"]
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.034494 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.034561 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.034610 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vh9w\" (UniqueName: \"kubernetes.io/projected/e324f737-7225-41ec-b3c5-6cc0c2931377-kube-api-access-4vh9w\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.034725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.034771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.136322 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.136405 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.136482 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vh9w\" (UniqueName: \"kubernetes.io/projected/e324f737-7225-41ec-b3c5-6cc0c2931377-kube-api-access-4vh9w\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.136957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.137009 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.141041 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.141221 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.141447 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.141513 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.157158 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vh9w\" (UniqueName: \"kubernetes.io/projected/e324f737-7225-41ec-b3c5-6cc0c2931377-kube-api-access-4vh9w\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"
Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.195911 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh" Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.758699 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh"] Nov 25 08:45:09 crc kubenswrapper[4760]: W1125 08:45:09.762670 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode324f737_7225_41ec_b3c5_6cc0c2931377.slice/crio-66631f10c16004c9e80f62eb038fc6c9f7c38184d6c034d3bce2163f3c2bf558 WatchSource:0}: Error finding container 66631f10c16004c9e80f62eb038fc6c9f7c38184d6c034d3bce2163f3c2bf558: Status 404 returned error can't find the container with id 66631f10c16004c9e80f62eb038fc6c9f7c38184d6c034d3bce2163f3c2bf558 Nov 25 08:45:09 crc kubenswrapper[4760]: I1125 08:45:09.782782 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh" event={"ID":"e324f737-7225-41ec-b3c5-6cc0c2931377","Type":"ContainerStarted","Data":"66631f10c16004c9e80f62eb038fc6c9f7c38184d6c034d3bce2163f3c2bf558"} Nov 25 08:45:11 crc kubenswrapper[4760]: I1125 08:45:11.809763 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh" event={"ID":"e324f737-7225-41ec-b3c5-6cc0c2931377","Type":"ContainerStarted","Data":"8d123ab46fceb825e3e1672d7417c1925b990aa497165249d28f6f1596775fe9"} Nov 25 08:45:11 crc kubenswrapper[4760]: I1125 08:45:11.832297 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh" podStartSLOduration=3.047873929 podStartE2EDuration="3.832280106s" podCreationTimestamp="2025-11-25 08:45:08 +0000 UTC" firstStartedPulling="2025-11-25 08:45:09.764913648 +0000 UTC m=+2043.473944443" lastFinishedPulling="2025-11-25 08:45:10.549319815 +0000 UTC m=+2044.258350620" 
observedRunningTime="2025-11-25 08:45:11.823842784 +0000 UTC m=+2045.532873589" watchObservedRunningTime="2025-11-25 08:45:11.832280106 +0000 UTC m=+2045.541310901" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.577548 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5btrg"] Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.579819 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.592395 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5btrg"] Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.707459 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-catalog-content\") pod \"redhat-operators-5btrg\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.707627 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-utilities\") pod \"redhat-operators-5btrg\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.707674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2xs\" (UniqueName: \"kubernetes.io/projected/c2ef98eb-5ebe-4882-b73b-7004029edec0-kube-api-access-dv2xs\") pod \"redhat-operators-5btrg\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.808775 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv2xs\" (UniqueName: \"kubernetes.io/projected/c2ef98eb-5ebe-4882-b73b-7004029edec0-kube-api-access-dv2xs\") pod \"redhat-operators-5btrg\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.808878 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-catalog-content\") pod \"redhat-operators-5btrg\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.808966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-utilities\") pod \"redhat-operators-5btrg\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.809584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-utilities\") pod \"redhat-operators-5btrg\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.809661 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-catalog-content\") pod \"redhat-operators-5btrg\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.837218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dv2xs\" (UniqueName: \"kubernetes.io/projected/c2ef98eb-5ebe-4882-b73b-7004029edec0-kube-api-access-dv2xs\") pod \"redhat-operators-5btrg\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:12 crc kubenswrapper[4760]: I1125 08:45:12.900698 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:13 crc kubenswrapper[4760]: I1125 08:45:13.366810 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5btrg"] Nov 25 08:45:13 crc kubenswrapper[4760]: I1125 08:45:13.826178 4760 generic.go:334] "Generic (PLEG): container finished" podID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerID="15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a" exitCode=0 Nov 25 08:45:13 crc kubenswrapper[4760]: I1125 08:45:13.826348 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btrg" event={"ID":"c2ef98eb-5ebe-4882-b73b-7004029edec0","Type":"ContainerDied","Data":"15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a"} Nov 25 08:45:13 crc kubenswrapper[4760]: I1125 08:45:13.826501 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btrg" event={"ID":"c2ef98eb-5ebe-4882-b73b-7004029edec0","Type":"ContainerStarted","Data":"bc8b2c14c0d70ff0b3f6e62c79a2d0fc1fa19defce2accfd2214a314a35b952e"} Nov 25 08:45:15 crc kubenswrapper[4760]: I1125 08:45:15.861075 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btrg" event={"ID":"c2ef98eb-5ebe-4882-b73b-7004029edec0","Type":"ContainerStarted","Data":"346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9"} Nov 25 08:45:17 crc kubenswrapper[4760]: I1125 08:45:17.877906 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerID="346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9" exitCode=0 Nov 25 08:45:17 crc kubenswrapper[4760]: I1125 08:45:17.877989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btrg" event={"ID":"c2ef98eb-5ebe-4882-b73b-7004029edec0","Type":"ContainerDied","Data":"346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9"} Nov 25 08:45:21 crc kubenswrapper[4760]: I1125 08:45:21.916081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btrg" event={"ID":"c2ef98eb-5ebe-4882-b73b-7004029edec0","Type":"ContainerStarted","Data":"a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49"} Nov 25 08:45:21 crc kubenswrapper[4760]: I1125 08:45:21.934867 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5btrg" podStartSLOduration=3.075650418 podStartE2EDuration="9.934843148s" podCreationTimestamp="2025-11-25 08:45:12 +0000 UTC" firstStartedPulling="2025-11-25 08:45:13.828014751 +0000 UTC m=+2047.537045546" lastFinishedPulling="2025-11-25 08:45:20.687207441 +0000 UTC m=+2054.396238276" observedRunningTime="2025-11-25 08:45:21.931013808 +0000 UTC m=+2055.640044613" watchObservedRunningTime="2025-11-25 08:45:21.934843148 +0000 UTC m=+2055.643873943" Nov 25 08:45:22 crc kubenswrapper[4760]: I1125 08:45:22.901690 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:22 crc kubenswrapper[4760]: I1125 08:45:22.901902 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:23 crc kubenswrapper[4760]: I1125 08:45:23.953170 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5btrg" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" 
containerName="registry-server" probeResult="failure" output=< Nov 25 08:45:23 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 08:45:23 crc kubenswrapper[4760]: > Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.208750 4760 scope.go:117] "RemoveContainer" containerID="839f652e3ca5e9821607b0b50bab056b4f3e327207570d48def85fff909b4c08" Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.251308 4760 scope.go:117] "RemoveContainer" containerID="f68fded35ba768785625cca84252cc4ec071b66b09c388860c5620b973bf2eda" Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.292982 4760 scope.go:117] "RemoveContainer" containerID="f6f64dee971c84aa69ef6091b2e6b8307b0ec963ce6e0ec5d9ecae41da4b60f2" Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.359128 4760 scope.go:117] "RemoveContainer" containerID="6c16aa8f71d4e7b7aa37c569cf465ddc6357ae10ed3e5290a26fb5f73bdc5226" Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.407973 4760 scope.go:117] "RemoveContainer" containerID="cb5bfe930bb1aa2423cd184286df88746db2dd94ce1c2459557cc5f905dadda9" Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.432260 4760 scope.go:117] "RemoveContainer" containerID="f232b828eb2d62e89694e9349d89ffbb63d2a688639461757ce281ea370a3b96" Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.512885 4760 scope.go:117] "RemoveContainer" containerID="849bf1b9368102117ef5e6f33c74681d2d35a8e411e8aed23ec46c15b1094ad0" Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.539316 4760 scope.go:117] "RemoveContainer" containerID="c4a6bea9beecf6b24e70659bdfd5928ff75ee43bf9666b3bd076b1f7dbbce5fd" Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.563432 4760 scope.go:117] "RemoveContainer" containerID="d2cd33877df8d74066d5b22351319bd21ec434011979859eb977b6971c5ff3c0" Nov 25 08:45:32 crc kubenswrapper[4760]: I1125 08:45:32.949661 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5btrg" 
Nov 25 08:45:33 crc kubenswrapper[4760]: I1125 08:45:33.004804 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:33 crc kubenswrapper[4760]: I1125 08:45:33.183806 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5btrg"] Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.037639 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5btrg" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerName="registry-server" containerID="cri-o://a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49" gracePeriod=2 Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.535186 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.613126 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv2xs\" (UniqueName: \"kubernetes.io/projected/c2ef98eb-5ebe-4882-b73b-7004029edec0-kube-api-access-dv2xs\") pod \"c2ef98eb-5ebe-4882-b73b-7004029edec0\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.613224 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-catalog-content\") pod \"c2ef98eb-5ebe-4882-b73b-7004029edec0\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.613311 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-utilities\") pod \"c2ef98eb-5ebe-4882-b73b-7004029edec0\" (UID: \"c2ef98eb-5ebe-4882-b73b-7004029edec0\") " Nov 25 
08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.614326 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-utilities" (OuterVolumeSpecName: "utilities") pod "c2ef98eb-5ebe-4882-b73b-7004029edec0" (UID: "c2ef98eb-5ebe-4882-b73b-7004029edec0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.619306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ef98eb-5ebe-4882-b73b-7004029edec0-kube-api-access-dv2xs" (OuterVolumeSpecName: "kube-api-access-dv2xs") pod "c2ef98eb-5ebe-4882-b73b-7004029edec0" (UID: "c2ef98eb-5ebe-4882-b73b-7004029edec0"). InnerVolumeSpecName "kube-api-access-dv2xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.707649 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2ef98eb-5ebe-4882-b73b-7004029edec0" (UID: "c2ef98eb-5ebe-4882-b73b-7004029edec0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.715409 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv2xs\" (UniqueName: \"kubernetes.io/projected/c2ef98eb-5ebe-4882-b73b-7004029edec0-kube-api-access-dv2xs\") on node \"crc\" DevicePath \"\"" Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.715453 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:45:34 crc kubenswrapper[4760]: I1125 08:45:34.715466 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef98eb-5ebe-4882-b73b-7004029edec0-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.048373 4760 generic.go:334] "Generic (PLEG): container finished" podID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerID="a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49" exitCode=0 Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.048437 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5btrg" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.048457 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btrg" event={"ID":"c2ef98eb-5ebe-4882-b73b-7004029edec0","Type":"ContainerDied","Data":"a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49"} Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.048824 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btrg" event={"ID":"c2ef98eb-5ebe-4882-b73b-7004029edec0","Type":"ContainerDied","Data":"bc8b2c14c0d70ff0b3f6e62c79a2d0fc1fa19defce2accfd2214a314a35b952e"} Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.048844 4760 scope.go:117] "RemoveContainer" containerID="a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.068855 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5btrg"] Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.072009 4760 scope.go:117] "RemoveContainer" containerID="346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.077591 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5btrg"] Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.088922 4760 scope.go:117] "RemoveContainer" containerID="15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.148950 4760 scope.go:117] "RemoveContainer" containerID="a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49" Nov 25 08:45:35 crc kubenswrapper[4760]: E1125 08:45:35.149694 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49\": container with ID starting with a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49 not found: ID does not exist" containerID="a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.149811 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49"} err="failed to get container status \"a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49\": rpc error: code = NotFound desc = could not find container \"a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49\": container with ID starting with a4a03fe3b93e1341c9761ac742482106c00d1d5e48d0510973f16e0a2778de49 not found: ID does not exist" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.149904 4760 scope.go:117] "RemoveContainer" containerID="346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9" Nov 25 08:45:35 crc kubenswrapper[4760]: E1125 08:45:35.150384 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9\": container with ID starting with 346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9 not found: ID does not exist" containerID="346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.150474 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9"} err="failed to get container status \"346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9\": rpc error: code = NotFound desc = could not find container \"346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9\": container with ID 
starting with 346ed15f674019573c05d14a32c800ba552f9ab419bce112e608f9922e3679a9 not found: ID does not exist" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.150559 4760 scope.go:117] "RemoveContainer" containerID="15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a" Nov 25 08:45:35 crc kubenswrapper[4760]: E1125 08:45:35.151012 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a\": container with ID starting with 15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a not found: ID does not exist" containerID="15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a" Nov 25 08:45:35 crc kubenswrapper[4760]: I1125 08:45:35.151050 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a"} err="failed to get container status \"15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a\": rpc error: code = NotFound desc = could not find container \"15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a\": container with ID starting with 15bca15127e3a482cd820bdc814a49cd2344a08f5b7683aa40a31f4e9bf3fe5a not found: ID does not exist" Nov 25 08:45:36 crc kubenswrapper[4760]: I1125 08:45:36.950942 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" path="/var/lib/kubelet/pods/c2ef98eb-5ebe-4882-b73b-7004029edec0/volumes" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.161599 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f2f9f"] Nov 25 08:45:50 crc kubenswrapper[4760]: E1125 08:45:50.193540 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerName="extract-utilities" Nov 25 08:45:50 crc 
kubenswrapper[4760]: I1125 08:45:50.197694 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerName="extract-utilities" Nov 25 08:45:50 crc kubenswrapper[4760]: E1125 08:45:50.197872 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerName="extract-content" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.197940 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerName="extract-content" Nov 25 08:45:50 crc kubenswrapper[4760]: E1125 08:45:50.198028 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerName="registry-server" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.198085 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerName="registry-server" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.198760 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ef98eb-5ebe-4882-b73b-7004029edec0" containerName="registry-server" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.206664 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.208819 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f2f9f"] Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.398920 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-utilities\") pod \"certified-operators-f2f9f\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.399277 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-catalog-content\") pod \"certified-operators-f2f9f\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.399492 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65f8k\" (UniqueName: \"kubernetes.io/projected/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-kube-api-access-65f8k\") pod \"certified-operators-f2f9f\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.501564 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-catalog-content\") pod \"certified-operators-f2f9f\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.501726 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-65f8k\" (UniqueName: \"kubernetes.io/projected/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-kube-api-access-65f8k\") pod \"certified-operators-f2f9f\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.501798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-utilities\") pod \"certified-operators-f2f9f\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.502401 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-utilities\") pod \"certified-operators-f2f9f\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.502562 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-catalog-content\") pod \"certified-operators-f2f9f\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.535357 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65f8k\" (UniqueName: \"kubernetes.io/projected/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-kube-api-access-65f8k\") pod \"certified-operators-f2f9f\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:50 crc kubenswrapper[4760]: I1125 08:45:50.541813 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:45:51 crc kubenswrapper[4760]: I1125 08:45:51.072490 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f2f9f"] Nov 25 08:45:51 crc kubenswrapper[4760]: I1125 08:45:51.230999 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2f9f" event={"ID":"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314","Type":"ContainerStarted","Data":"f8b0f3aa50fe479957997e827deb770b26150c978ab21803157c97c4b7204d69"} Nov 25 08:45:52 crc kubenswrapper[4760]: I1125 08:45:52.239818 4760 generic.go:334] "Generic (PLEG): container finished" podID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerID="1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1" exitCode=0 Nov 25 08:45:52 crc kubenswrapper[4760]: I1125 08:45:52.239884 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2f9f" event={"ID":"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314","Type":"ContainerDied","Data":"1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1"} Nov 25 08:45:54 crc kubenswrapper[4760]: I1125 08:45:54.261713 4760 generic.go:334] "Generic (PLEG): container finished" podID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerID="47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352" exitCode=0 Nov 25 08:45:54 crc kubenswrapper[4760]: I1125 08:45:54.261824 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2f9f" event={"ID":"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314","Type":"ContainerDied","Data":"47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352"} Nov 25 08:45:55 crc kubenswrapper[4760]: I1125 08:45:55.276000 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2f9f" 
event={"ID":"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314","Type":"ContainerStarted","Data":"b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52"} Nov 25 08:45:55 crc kubenswrapper[4760]: I1125 08:45:55.296839 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f2f9f" podStartSLOduration=2.67873366 podStartE2EDuration="5.296822215s" podCreationTimestamp="2025-11-25 08:45:50 +0000 UTC" firstStartedPulling="2025-11-25 08:45:52.242293197 +0000 UTC m=+2085.951324012" lastFinishedPulling="2025-11-25 08:45:54.860381772 +0000 UTC m=+2088.569412567" observedRunningTime="2025-11-25 08:45:55.292961874 +0000 UTC m=+2089.001992669" watchObservedRunningTime="2025-11-25 08:45:55.296822215 +0000 UTC m=+2089.005853010" Nov 25 08:46:00 crc kubenswrapper[4760]: I1125 08:46:00.543126 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:46:00 crc kubenswrapper[4760]: I1125 08:46:00.543523 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:46:00 crc kubenswrapper[4760]: I1125 08:46:00.588624 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:46:01 crc kubenswrapper[4760]: I1125 08:46:01.382370 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:46:01 crc kubenswrapper[4760]: I1125 08:46:01.428303 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f2f9f"] Nov 25 08:46:03 crc kubenswrapper[4760]: I1125 08:46:03.352445 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f2f9f" podUID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerName="registry-server" 
containerID="cri-o://b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52" gracePeriod=2 Nov 25 08:46:03 crc kubenswrapper[4760]: I1125 08:46:03.785052 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:46:03 crc kubenswrapper[4760]: I1125 08:46:03.968236 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-utilities\") pod \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " Nov 25 08:46:03 crc kubenswrapper[4760]: I1125 08:46:03.968410 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65f8k\" (UniqueName: \"kubernetes.io/projected/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-kube-api-access-65f8k\") pod \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " Nov 25 08:46:03 crc kubenswrapper[4760]: I1125 08:46:03.968574 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-catalog-content\") pod \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\" (UID: \"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314\") " Nov 25 08:46:03 crc kubenswrapper[4760]: I1125 08:46:03.973036 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-utilities" (OuterVolumeSpecName: "utilities") pod "d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" (UID: "d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:46:03 crc kubenswrapper[4760]: I1125 08:46:03.992595 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-kube-api-access-65f8k" (OuterVolumeSpecName: "kube-api-access-65f8k") pod "d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" (UID: "d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314"). InnerVolumeSpecName "kube-api-access-65f8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.025832 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" (UID: "d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.071912 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.071949 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65f8k\" (UniqueName: \"kubernetes.io/projected/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-kube-api-access-65f8k\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.071965 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.364936 4760 generic.go:334] "Generic (PLEG): container finished" podID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" 
containerID="b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52" exitCode=0 Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.364981 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2f9f" event={"ID":"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314","Type":"ContainerDied","Data":"b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52"} Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.364995 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f2f9f" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.365003 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2f9f" event={"ID":"d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314","Type":"ContainerDied","Data":"f8b0f3aa50fe479957997e827deb770b26150c978ab21803157c97c4b7204d69"} Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.365020 4760 scope.go:117] "RemoveContainer" containerID="b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.400483 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f2f9f"] Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.403196 4760 scope.go:117] "RemoveContainer" containerID="47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.411161 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f2f9f"] Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.429509 4760 scope.go:117] "RemoveContainer" containerID="1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.468978 4760 scope.go:117] "RemoveContainer" containerID="b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52" Nov 25 
08:46:04 crc kubenswrapper[4760]: E1125 08:46:04.469692 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52\": container with ID starting with b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52 not found: ID does not exist" containerID="b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.469761 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52"} err="failed to get container status \"b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52\": rpc error: code = NotFound desc = could not find container \"b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52\": container with ID starting with b20dd508c585c974f6134edff81dee1f7ee530c26386d01686ea38cb68d2eb52 not found: ID does not exist" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.469794 4760 scope.go:117] "RemoveContainer" containerID="47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352" Nov 25 08:46:04 crc kubenswrapper[4760]: E1125 08:46:04.470263 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352\": container with ID starting with 47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352 not found: ID does not exist" containerID="47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.470305 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352"} err="failed to get container status 
\"47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352\": rpc error: code = NotFound desc = could not find container \"47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352\": container with ID starting with 47c769018a5057d25d0e52628998b0085fe2619d7533b9fa916227ff050ff352 not found: ID does not exist" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.470330 4760 scope.go:117] "RemoveContainer" containerID="1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1" Nov 25 08:46:04 crc kubenswrapper[4760]: E1125 08:46:04.470616 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1\": container with ID starting with 1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1 not found: ID does not exist" containerID="1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.470669 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1"} err="failed to get container status \"1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1\": rpc error: code = NotFound desc = could not find container \"1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1\": container with ID starting with 1c8bb841ad964b97ef0ea27b78a8c1c64aeb619de4435dcf4f0dac6c5821a2a1 not found: ID does not exist" Nov 25 08:46:04 crc kubenswrapper[4760]: I1125 08:46:04.949674 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" path="/var/lib/kubelet/pods/d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314/volumes" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.598137 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j87ll"] Nov 25 08:46:16 
crc kubenswrapper[4760]: E1125 08:46:16.598987 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerName="extract-content" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.599000 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerName="extract-content" Nov 25 08:46:16 crc kubenswrapper[4760]: E1125 08:46:16.599011 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerName="extract-utilities" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.599018 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerName="extract-utilities" Nov 25 08:46:16 crc kubenswrapper[4760]: E1125 08:46:16.599050 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerName="registry-server" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.599056 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerName="registry-server" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.599214 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18b9cdd-2f4c-4f9a-a7cf-775ec1d1e314" containerName="registry-server" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.600665 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.615211 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87ll"] Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.712321 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-catalog-content\") pod \"redhat-marketplace-j87ll\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.713048 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjqw7\" (UniqueName: \"kubernetes.io/projected/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-kube-api-access-qjqw7\") pod \"redhat-marketplace-j87ll\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.713197 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-utilities\") pod \"redhat-marketplace-j87ll\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.815612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-catalog-content\") pod \"redhat-marketplace-j87ll\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.815740 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qjqw7\" (UniqueName: \"kubernetes.io/projected/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-kube-api-access-qjqw7\") pod \"redhat-marketplace-j87ll\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.815762 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-utilities\") pod \"redhat-marketplace-j87ll\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.816073 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-catalog-content\") pod \"redhat-marketplace-j87ll\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.816162 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-utilities\") pod \"redhat-marketplace-j87ll\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.835378 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjqw7\" (UniqueName: \"kubernetes.io/projected/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-kube-api-access-qjqw7\") pod \"redhat-marketplace-j87ll\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:16 crc kubenswrapper[4760]: I1125 08:46:16.953702 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:17 crc kubenswrapper[4760]: I1125 08:46:17.428839 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87ll"] Nov 25 08:46:17 crc kubenswrapper[4760]: I1125 08:46:17.463758 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87ll" event={"ID":"00f57d94-f8d8-4e8d-b73a-fdb0189faea3","Type":"ContainerStarted","Data":"47553bacfafe5c928b878bfe8bd88d83d0a8e66b185e4744a9915c8ae7302da1"} Nov 25 08:46:18 crc kubenswrapper[4760]: I1125 08:46:18.471504 4760 generic.go:334] "Generic (PLEG): container finished" podID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerID="280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2" exitCode=0 Nov 25 08:46:18 crc kubenswrapper[4760]: I1125 08:46:18.471605 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87ll" event={"ID":"00f57d94-f8d8-4e8d-b73a-fdb0189faea3","Type":"ContainerDied","Data":"280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2"} Nov 25 08:46:19 crc kubenswrapper[4760]: I1125 08:46:19.485478 4760 generic.go:334] "Generic (PLEG): container finished" podID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerID="c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57" exitCode=0 Nov 25 08:46:19 crc kubenswrapper[4760]: I1125 08:46:19.485699 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87ll" event={"ID":"00f57d94-f8d8-4e8d-b73a-fdb0189faea3","Type":"ContainerDied","Data":"c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57"} Nov 25 08:46:20 crc kubenswrapper[4760]: I1125 08:46:20.497318 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87ll" 
event={"ID":"00f57d94-f8d8-4e8d-b73a-fdb0189faea3","Type":"ContainerStarted","Data":"5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f"} Nov 25 08:46:20 crc kubenswrapper[4760]: I1125 08:46:20.565614 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j87ll" podStartSLOduration=2.9302882820000002 podStartE2EDuration="4.565593933s" podCreationTimestamp="2025-11-25 08:46:16 +0000 UTC" firstStartedPulling="2025-11-25 08:46:18.473565828 +0000 UTC m=+2112.182596633" lastFinishedPulling="2025-11-25 08:46:20.108871489 +0000 UTC m=+2113.817902284" observedRunningTime="2025-11-25 08:46:20.559586601 +0000 UTC m=+2114.268617406" watchObservedRunningTime="2025-11-25 08:46:20.565593933 +0000 UTC m=+2114.274624728" Nov 25 08:46:26 crc kubenswrapper[4760]: I1125 08:46:26.954471 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:26 crc kubenswrapper[4760]: I1125 08:46:26.954943 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:27 crc kubenswrapper[4760]: I1125 08:46:27.043680 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:27 crc kubenswrapper[4760]: I1125 08:46:27.604331 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:27 crc kubenswrapper[4760]: I1125 08:46:27.645241 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87ll"] Nov 25 08:46:29 crc kubenswrapper[4760]: I1125 08:46:29.571534 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j87ll" podUID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerName="registry-server" 
containerID="cri-o://5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f" gracePeriod=2 Nov 25 08:46:29 crc kubenswrapper[4760]: I1125 08:46:29.975139 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.055692 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjqw7\" (UniqueName: \"kubernetes.io/projected/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-kube-api-access-qjqw7\") pod \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.056042 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-catalog-content\") pod \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.056571 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-utilities\") pod \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\" (UID: \"00f57d94-f8d8-4e8d-b73a-fdb0189faea3\") " Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.057421 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-utilities" (OuterVolumeSpecName: "utilities") pod "00f57d94-f8d8-4e8d-b73a-fdb0189faea3" (UID: "00f57d94-f8d8-4e8d-b73a-fdb0189faea3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.058406 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.062231 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-kube-api-access-qjqw7" (OuterVolumeSpecName: "kube-api-access-qjqw7") pod "00f57d94-f8d8-4e8d-b73a-fdb0189faea3" (UID: "00f57d94-f8d8-4e8d-b73a-fdb0189faea3"). InnerVolumeSpecName "kube-api-access-qjqw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.072270 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00f57d94-f8d8-4e8d-b73a-fdb0189faea3" (UID: "00f57d94-f8d8-4e8d-b73a-fdb0189faea3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.160076 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjqw7\" (UniqueName: \"kubernetes.io/projected/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-kube-api-access-qjqw7\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.160110 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f57d94-f8d8-4e8d-b73a-fdb0189faea3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.581169 4760 generic.go:334] "Generic (PLEG): container finished" podID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerID="5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f" exitCode=0 Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.581256 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87ll" event={"ID":"00f57d94-f8d8-4e8d-b73a-fdb0189faea3","Type":"ContainerDied","Data":"5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f"} Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.581285 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j87ll" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.581504 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87ll" event={"ID":"00f57d94-f8d8-4e8d-b73a-fdb0189faea3","Type":"ContainerDied","Data":"47553bacfafe5c928b878bfe8bd88d83d0a8e66b185e4744a9915c8ae7302da1"} Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.581526 4760 scope.go:117] "RemoveContainer" containerID="5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.609218 4760 scope.go:117] "RemoveContainer" containerID="c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.618326 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87ll"] Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.623829 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87ll"] Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.635044 4760 scope.go:117] "RemoveContainer" containerID="280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.681646 4760 scope.go:117] "RemoveContainer" containerID="5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f" Nov 25 08:46:30 crc kubenswrapper[4760]: E1125 08:46:30.682684 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f\": container with ID starting with 5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f not found: ID does not exist" containerID="5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.682720 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f"} err="failed to get container status \"5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f\": rpc error: code = NotFound desc = could not find container \"5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f\": container with ID starting with 5e9f0cf156d12a1ff5c615db328c700fb78d1ab7708afa80717d4b840376e93f not found: ID does not exist" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.682747 4760 scope.go:117] "RemoveContainer" containerID="c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57" Nov 25 08:46:30 crc kubenswrapper[4760]: E1125 08:46:30.683102 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57\": container with ID starting with c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57 not found: ID does not exist" containerID="c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.683152 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57"} err="failed to get container status \"c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57\": rpc error: code = NotFound desc = could not find container \"c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57\": container with ID starting with c5e95257d93816faae61569c66303ef9c1ca474f5ee99e17a061883e41b43e57 not found: ID does not exist" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.683184 4760 scope.go:117] "RemoveContainer" containerID="280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2" Nov 25 08:46:30 crc kubenswrapper[4760]: E1125 
08:46:30.686087 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2\": container with ID starting with 280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2 not found: ID does not exist" containerID="280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.686134 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2"} err="failed to get container status \"280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2\": rpc error: code = NotFound desc = could not find container \"280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2\": container with ID starting with 280dbf94a715bd2411be4a7632bb06c2b1308086108ee1abfdc4177de182adb2 not found: ID does not exist" Nov 25 08:46:30 crc kubenswrapper[4760]: I1125 08:46:30.948439 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" path="/var/lib/kubelet/pods/00f57d94-f8d8-4e8d-b73a-fdb0189faea3/volumes" Nov 25 08:46:31 crc kubenswrapper[4760]: I1125 08:46:31.746201 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:46:31 crc kubenswrapper[4760]: I1125 08:46:31.746301 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 08:46:32 crc kubenswrapper[4760]: I1125 08:46:32.809752 4760 scope.go:117] "RemoveContainer" containerID="377c81043b3f2a193d6f5a122cb265765a5950bdc68041d927959a8259bafc57" Nov 25 08:46:32 crc kubenswrapper[4760]: I1125 08:46:32.854972 4760 scope.go:117] "RemoveContainer" containerID="27ef4b57f7b4a81bec11827a9c42b0ae75d40d6858072b66bcb3b6f3280efdd3" Nov 25 08:46:51 crc kubenswrapper[4760]: I1125 08:46:51.754987 4760 generic.go:334] "Generic (PLEG): container finished" podID="e324f737-7225-41ec-b3c5-6cc0c2931377" containerID="8d123ab46fceb825e3e1672d7417c1925b990aa497165249d28f6f1596775fe9" exitCode=0 Nov 25 08:46:51 crc kubenswrapper[4760]: I1125 08:46:51.755072 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh" event={"ID":"e324f737-7225-41ec-b3c5-6cc0c2931377","Type":"ContainerDied","Data":"8d123ab46fceb825e3e1672d7417c1925b990aa497165249d28f6f1596775fe9"} Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.177930 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.309805 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vh9w\" (UniqueName: \"kubernetes.io/projected/e324f737-7225-41ec-b3c5-6cc0c2931377-kube-api-access-4vh9w\") pod \"e324f737-7225-41ec-b3c5-6cc0c2931377\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.309896 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-inventory\") pod \"e324f737-7225-41ec-b3c5-6cc0c2931377\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.309975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ceph\") pod \"e324f737-7225-41ec-b3c5-6cc0c2931377\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.310000 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ssh-key\") pod \"e324f737-7225-41ec-b3c5-6cc0c2931377\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.310041 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-bootstrap-combined-ca-bundle\") pod \"e324f737-7225-41ec-b3c5-6cc0c2931377\" (UID: \"e324f737-7225-41ec-b3c5-6cc0c2931377\") " Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.315193 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e324f737-7225-41ec-b3c5-6cc0c2931377" (UID: "e324f737-7225-41ec-b3c5-6cc0c2931377"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.315211 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e324f737-7225-41ec-b3c5-6cc0c2931377-kube-api-access-4vh9w" (OuterVolumeSpecName: "kube-api-access-4vh9w") pod "e324f737-7225-41ec-b3c5-6cc0c2931377" (UID: "e324f737-7225-41ec-b3c5-6cc0c2931377"). InnerVolumeSpecName "kube-api-access-4vh9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.317096 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ceph" (OuterVolumeSpecName: "ceph") pod "e324f737-7225-41ec-b3c5-6cc0c2931377" (UID: "e324f737-7225-41ec-b3c5-6cc0c2931377"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.339602 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-inventory" (OuterVolumeSpecName: "inventory") pod "e324f737-7225-41ec-b3c5-6cc0c2931377" (UID: "e324f737-7225-41ec-b3c5-6cc0c2931377"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.348081 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e324f737-7225-41ec-b3c5-6cc0c2931377" (UID: "e324f737-7225-41ec-b3c5-6cc0c2931377"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.412699 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vh9w\" (UniqueName: \"kubernetes.io/projected/e324f737-7225-41ec-b3c5-6cc0c2931377-kube-api-access-4vh9w\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.412735 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.412745 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.412755 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.412765 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e324f737-7225-41ec-b3c5-6cc0c2931377-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.775672 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh" event={"ID":"e324f737-7225-41ec-b3c5-6cc0c2931377","Type":"ContainerDied","Data":"66631f10c16004c9e80f62eb038fc6c9f7c38184d6c034d3bce2163f3c2bf558"} Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.775714 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66631f10c16004c9e80f62eb038fc6c9f7c38184d6c034d3bce2163f3c2bf558" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.775787 4760 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.862435 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd"] Nov 25 08:46:53 crc kubenswrapper[4760]: E1125 08:46:53.862827 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerName="registry-server" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.862855 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerName="registry-server" Nov 25 08:46:53 crc kubenswrapper[4760]: E1125 08:46:53.862871 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e324f737-7225-41ec-b3c5-6cc0c2931377" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.862879 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e324f737-7225-41ec-b3c5-6cc0c2931377" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 08:46:53 crc kubenswrapper[4760]: E1125 08:46:53.862896 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerName="extract-utilities" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.862903 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerName="extract-utilities" Nov 25 08:46:53 crc kubenswrapper[4760]: E1125 08:46:53.862923 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerName="extract-content" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.862931 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerName="extract-content" Nov 25 
08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.863157 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e324f737-7225-41ec-b3c5-6cc0c2931377" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.863179 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f57d94-f8d8-4e8d-b73a-fdb0189faea3" containerName="registry-server" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.863913 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.866824 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.868621 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.869157 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.872199 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.873266 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:46:53 crc kubenswrapper[4760]: I1125 08:46:53.875902 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd"] Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.022413 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.022517 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.022567 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d49\" (UniqueName: \"kubernetes.io/projected/ed298743-8f13-44a6-bbff-1b5702a1a0f5-kube-api-access-b5d49\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.022638 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.124397 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.124478 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5d49\" (UniqueName: \"kubernetes.io/projected/ed298743-8f13-44a6-bbff-1b5702a1a0f5-kube-api-access-b5d49\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.124557 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.124621 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.138147 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.138153 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.138278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.143719 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5d49\" (UniqueName: \"kubernetes.io/projected/ed298743-8f13-44a6-bbff-1b5702a1a0f5-kube-api-access-b5d49\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.228408 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.732951 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd"] Nov 25 08:46:54 crc kubenswrapper[4760]: I1125 08:46:54.783639 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" event={"ID":"ed298743-8f13-44a6-bbff-1b5702a1a0f5","Type":"ContainerStarted","Data":"351934e567e680046f35125754b44ceb5c859cb7a69affb901a03532e93a6cd4"} Nov 25 08:46:55 crc kubenswrapper[4760]: I1125 08:46:55.798909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" event={"ID":"ed298743-8f13-44a6-bbff-1b5702a1a0f5","Type":"ContainerStarted","Data":"784fb9b526fe763defad7d3eae4d99496a3c04eb8fb18f297ebc960ddb266450"} Nov 25 08:46:55 crc kubenswrapper[4760]: I1125 08:46:55.824809 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" podStartSLOduration=2.435103315 podStartE2EDuration="2.824790706s" podCreationTimestamp="2025-11-25 08:46:53 +0000 UTC" firstStartedPulling="2025-11-25 08:46:54.739656416 +0000 UTC m=+2148.448687211" lastFinishedPulling="2025-11-25 08:46:55.129343807 +0000 UTC m=+2148.838374602" observedRunningTime="2025-11-25 08:46:55.818500485 +0000 UTC m=+2149.527531300" watchObservedRunningTime="2025-11-25 08:46:55.824790706 +0000 UTC m=+2149.533821501" Nov 25 08:47:01 crc kubenswrapper[4760]: I1125 08:47:01.746614 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Nov 25 08:47:01 crc kubenswrapper[4760]: I1125 08:47:01.746989 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:47:18 crc kubenswrapper[4760]: I1125 08:47:18.988328 4760 generic.go:334] "Generic (PLEG): container finished" podID="ed298743-8f13-44a6-bbff-1b5702a1a0f5" containerID="784fb9b526fe763defad7d3eae4d99496a3c04eb8fb18f297ebc960ddb266450" exitCode=0 Nov 25 08:47:18 crc kubenswrapper[4760]: I1125 08:47:18.988418 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" event={"ID":"ed298743-8f13-44a6-bbff-1b5702a1a0f5","Type":"ContainerDied","Data":"784fb9b526fe763defad7d3eae4d99496a3c04eb8fb18f297ebc960ddb266450"} Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.372095 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.470735 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5d49\" (UniqueName: \"kubernetes.io/projected/ed298743-8f13-44a6-bbff-1b5702a1a0f5-kube-api-access-b5d49\") pod \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.470841 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ssh-key\") pod \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.470866 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ceph\") pod \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.470899 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-inventory\") pod \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\" (UID: \"ed298743-8f13-44a6-bbff-1b5702a1a0f5\") " Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.476135 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed298743-8f13-44a6-bbff-1b5702a1a0f5-kube-api-access-b5d49" (OuterVolumeSpecName: "kube-api-access-b5d49") pod "ed298743-8f13-44a6-bbff-1b5702a1a0f5" (UID: "ed298743-8f13-44a6-bbff-1b5702a1a0f5"). InnerVolumeSpecName "kube-api-access-b5d49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.476592 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ceph" (OuterVolumeSpecName: "ceph") pod "ed298743-8f13-44a6-bbff-1b5702a1a0f5" (UID: "ed298743-8f13-44a6-bbff-1b5702a1a0f5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.499442 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-inventory" (OuterVolumeSpecName: "inventory") pod "ed298743-8f13-44a6-bbff-1b5702a1a0f5" (UID: "ed298743-8f13-44a6-bbff-1b5702a1a0f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.499472 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ed298743-8f13-44a6-bbff-1b5702a1a0f5" (UID: "ed298743-8f13-44a6-bbff-1b5702a1a0f5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.572790 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5d49\" (UniqueName: \"kubernetes.io/projected/ed298743-8f13-44a6-bbff-1b5702a1a0f5-kube-api-access-b5d49\") on node \"crc\" DevicePath \"\"" Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.572824 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.572833 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:47:20 crc kubenswrapper[4760]: I1125 08:47:20.572842 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed298743-8f13-44a6-bbff-1b5702a1a0f5-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.003849 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" event={"ID":"ed298743-8f13-44a6-bbff-1b5702a1a0f5","Type":"ContainerDied","Data":"351934e567e680046f35125754b44ceb5c859cb7a69affb901a03532e93a6cd4"} Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.004163 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="351934e567e680046f35125754b44ceb5c859cb7a69affb901a03532e93a6cd4" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.003944 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.084404 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr"] Nov 25 08:47:21 crc kubenswrapper[4760]: E1125 08:47:21.084760 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed298743-8f13-44a6-bbff-1b5702a1a0f5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.084777 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed298743-8f13-44a6-bbff-1b5702a1a0f5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.084959 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed298743-8f13-44a6-bbff-1b5702a1a0f5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.085611 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.087741 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.087964 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.090144 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.090455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.090597 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.092720 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr"] Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.183952 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.184271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.184519 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.184969 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77n6b\" (UniqueName: \"kubernetes.io/projected/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-kube-api-access-77n6b\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.286469 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.286545 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.286593 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.286669 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77n6b\" (UniqueName: \"kubernetes.io/projected/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-kube-api-access-77n6b\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.290569 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.290634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.291283 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.303065 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77n6b\" (UniqueName: \"kubernetes.io/projected/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-kube-api-access-77n6b\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.402649 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:21 crc kubenswrapper[4760]: I1125 08:47:21.946154 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr"] Nov 25 08:47:22 crc kubenswrapper[4760]: I1125 08:47:22.014200 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" event={"ID":"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc","Type":"ContainerStarted","Data":"3ce8eeae813f04370c6463dc3c48ebaca50aacb1beeb98f4dfdf4786fa2fe6d3"} Nov 25 08:47:23 crc kubenswrapper[4760]: I1125 08:47:23.025166 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" event={"ID":"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc","Type":"ContainerStarted","Data":"bd7d669c1b8f146126f4c2703f5b72a5b3fb82c8719853086524b14e8a0d69ab"} Nov 25 08:47:23 crc kubenswrapper[4760]: I1125 08:47:23.043334 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" podStartSLOduration=1.5902411459999999 podStartE2EDuration="2.043318819s" podCreationTimestamp="2025-11-25 08:47:21 +0000 UTC" firstStartedPulling="2025-11-25 
08:47:21.951017754 +0000 UTC m=+2175.660048549" lastFinishedPulling="2025-11-25 08:47:22.404095427 +0000 UTC m=+2176.113126222" observedRunningTime="2025-11-25 08:47:23.041575279 +0000 UTC m=+2176.750606074" watchObservedRunningTime="2025-11-25 08:47:23.043318819 +0000 UTC m=+2176.752349614" Nov 25 08:47:28 crc kubenswrapper[4760]: I1125 08:47:28.063051 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd5f7e13-b05e-4843-930f-62a3bf6e7ddc" containerID="bd7d669c1b8f146126f4c2703f5b72a5b3fb82c8719853086524b14e8a0d69ab" exitCode=0 Nov 25 08:47:28 crc kubenswrapper[4760]: I1125 08:47:28.063122 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" event={"ID":"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc","Type":"ContainerDied","Data":"bd7d669c1b8f146126f4c2703f5b72a5b3fb82c8719853086524b14e8a0d69ab"} Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.465218 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.532333 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77n6b\" (UniqueName: \"kubernetes.io/projected/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-kube-api-access-77n6b\") pod \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.532631 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ssh-key\") pod \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.532677 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ceph\") pod \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.532708 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-inventory\") pod \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\" (UID: \"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc\") " Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.539709 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-kube-api-access-77n6b" (OuterVolumeSpecName: "kube-api-access-77n6b") pod "fd5f7e13-b05e-4843-930f-62a3bf6e7ddc" (UID: "fd5f7e13-b05e-4843-930f-62a3bf6e7ddc"). InnerVolumeSpecName "kube-api-access-77n6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.543339 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ceph" (OuterVolumeSpecName: "ceph") pod "fd5f7e13-b05e-4843-930f-62a3bf6e7ddc" (UID: "fd5f7e13-b05e-4843-930f-62a3bf6e7ddc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.557333 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fd5f7e13-b05e-4843-930f-62a3bf6e7ddc" (UID: "fd5f7e13-b05e-4843-930f-62a3bf6e7ddc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.557353 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-inventory" (OuterVolumeSpecName: "inventory") pod "fd5f7e13-b05e-4843-930f-62a3bf6e7ddc" (UID: "fd5f7e13-b05e-4843-930f-62a3bf6e7ddc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.635955 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.636180 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.636393 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:47:29 crc kubenswrapper[4760]: I1125 08:47:29.636458 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77n6b\" (UniqueName: \"kubernetes.io/projected/fd5f7e13-b05e-4843-930f-62a3bf6e7ddc-kube-api-access-77n6b\") on node \"crc\" DevicePath \"\"" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.082165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" event={"ID":"fd5f7e13-b05e-4843-930f-62a3bf6e7ddc","Type":"ContainerDied","Data":"3ce8eeae813f04370c6463dc3c48ebaca50aacb1beeb98f4dfdf4786fa2fe6d3"} Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.082484 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ce8eeae813f04370c6463dc3c48ebaca50aacb1beeb98f4dfdf4786fa2fe6d3" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.082481 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.190611 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn"] Nov 25 08:47:30 crc kubenswrapper[4760]: E1125 08:47:30.191047 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5f7e13-b05e-4843-930f-62a3bf6e7ddc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.191065 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5f7e13-b05e-4843-930f-62a3bf6e7ddc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.191230 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5f7e13-b05e-4843-930f-62a3bf6e7ddc" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.191833 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.193692 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.194124 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.194221 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.194295 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.194328 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.206731 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn"] Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.243499 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.243629 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.243727 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j88bc\" (UniqueName: \"kubernetes.io/projected/e3e21edb-5737-49cd-bc9c-407e5f7f5445-kube-api-access-j88bc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.243912 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.344780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.344959 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.345036 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.345096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j88bc\" (UniqueName: \"kubernetes.io/projected/e3e21edb-5737-49cd-bc9c-407e5f7f5445-kube-api-access-j88bc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.349031 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.349639 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.352777 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 
08:47:30.361229 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j88bc\" (UniqueName: \"kubernetes.io/projected/e3e21edb-5737-49cd-bc9c-407e5f7f5445-kube-api-access-j88bc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ggjjn\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:30 crc kubenswrapper[4760]: I1125 08:47:30.511192 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:47:31 crc kubenswrapper[4760]: I1125 08:47:31.015807 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn"] Nov 25 08:47:31 crc kubenswrapper[4760]: I1125 08:47:31.090213 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" event={"ID":"e3e21edb-5737-49cd-bc9c-407e5f7f5445","Type":"ContainerStarted","Data":"db3a6faf04fcbd1a7dc3b48d3be2430d2baf4046aab0b93bd64b948f4f7a10d6"} Nov 25 08:47:31 crc kubenswrapper[4760]: I1125 08:47:31.745974 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:47:31 crc kubenswrapper[4760]: I1125 08:47:31.746377 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:47:31 crc kubenswrapper[4760]: I1125 08:47:31.746423 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:47:31 crc kubenswrapper[4760]: I1125 08:47:31.747276 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a867ce918c353a52f7d744d4ae5764d73a3af9c88d9c5804bb0260064416eb30"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:47:31 crc kubenswrapper[4760]: I1125 08:47:31.747342 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://a867ce918c353a52f7d744d4ae5764d73a3af9c88d9c5804bb0260064416eb30" gracePeriod=600 Nov 25 08:47:32 crc kubenswrapper[4760]: I1125 08:47:32.103474 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" event={"ID":"e3e21edb-5737-49cd-bc9c-407e5f7f5445","Type":"ContainerStarted","Data":"599f9a664b4ff6b7f24073fa77b40dabd0ce3888ef525d3df7a5959367cc0ca6"} Nov 25 08:47:32 crc kubenswrapper[4760]: I1125 08:47:32.118109 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="a867ce918c353a52f7d744d4ae5764d73a3af9c88d9c5804bb0260064416eb30" exitCode=0 Nov 25 08:47:32 crc kubenswrapper[4760]: I1125 08:47:32.118179 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"a867ce918c353a52f7d744d4ae5764d73a3af9c88d9c5804bb0260064416eb30"} Nov 25 08:47:32 crc kubenswrapper[4760]: I1125 08:47:32.118223 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72"} Nov 25 08:47:32 crc kubenswrapper[4760]: I1125 08:47:32.118285 4760 scope.go:117] "RemoveContainer" containerID="4be479deaff756ff04467aad83c5652587d5c60f37439e3baa52b177b7a3d21c" Nov 25 08:47:32 crc kubenswrapper[4760]: I1125 08:47:32.159964 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" podStartSLOduration=1.713458599 podStartE2EDuration="2.159944632s" podCreationTimestamp="2025-11-25 08:47:30 +0000 UTC" firstStartedPulling="2025-11-25 08:47:31.023644062 +0000 UTC m=+2184.732674857" lastFinishedPulling="2025-11-25 08:47:31.470130095 +0000 UTC m=+2185.179160890" observedRunningTime="2025-11-25 08:47:32.15638333 +0000 UTC m=+2185.865414135" watchObservedRunningTime="2025-11-25 08:47:32.159944632 +0000 UTC m=+2185.868975427" Nov 25 08:48:07 crc kubenswrapper[4760]: I1125 08:48:07.426150 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3e21edb-5737-49cd-bc9c-407e5f7f5445" containerID="599f9a664b4ff6b7f24073fa77b40dabd0ce3888ef525d3df7a5959367cc0ca6" exitCode=0 Nov 25 08:48:07 crc kubenswrapper[4760]: I1125 08:48:07.426225 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" event={"ID":"e3e21edb-5737-49cd-bc9c-407e5f7f5445","Type":"ContainerDied","Data":"599f9a664b4ff6b7f24073fa77b40dabd0ce3888ef525d3df7a5959367cc0ca6"} Nov 25 08:48:08 crc kubenswrapper[4760]: I1125 08:48:08.864078 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:48:08 crc kubenswrapper[4760]: I1125 08:48:08.973375 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ceph\") pod \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " Nov 25 08:48:08 crc kubenswrapper[4760]: I1125 08:48:08.973417 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j88bc\" (UniqueName: \"kubernetes.io/projected/e3e21edb-5737-49cd-bc9c-407e5f7f5445-kube-api-access-j88bc\") pod \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " Nov 25 08:48:08 crc kubenswrapper[4760]: I1125 08:48:08.973601 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ssh-key\") pod \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " Nov 25 08:48:08 crc kubenswrapper[4760]: I1125 08:48:08.973655 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-inventory\") pod \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\" (UID: \"e3e21edb-5737-49cd-bc9c-407e5f7f5445\") " Nov 25 08:48:08 crc kubenswrapper[4760]: I1125 08:48:08.978685 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e21edb-5737-49cd-bc9c-407e5f7f5445-kube-api-access-j88bc" (OuterVolumeSpecName: "kube-api-access-j88bc") pod "e3e21edb-5737-49cd-bc9c-407e5f7f5445" (UID: "e3e21edb-5737-49cd-bc9c-407e5f7f5445"). InnerVolumeSpecName "kube-api-access-j88bc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:48:08 crc kubenswrapper[4760]: I1125 08:48:08.980192 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ceph" (OuterVolumeSpecName: "ceph") pod "e3e21edb-5737-49cd-bc9c-407e5f7f5445" (UID: "e3e21edb-5737-49cd-bc9c-407e5f7f5445"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:48:08 crc kubenswrapper[4760]: I1125 08:48:08.998703 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-inventory" (OuterVolumeSpecName: "inventory") pod "e3e21edb-5737-49cd-bc9c-407e5f7f5445" (UID: "e3e21edb-5737-49cd-bc9c-407e5f7f5445"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:48:08 crc kubenswrapper[4760]: I1125 08:48:08.999528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3e21edb-5737-49cd-bc9c-407e5f7f5445" (UID: "e3e21edb-5737-49cd-bc9c-407e5f7f5445"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.075938 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.075993 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j88bc\" (UniqueName: \"kubernetes.io/projected/e3e21edb-5737-49cd-bc9c-407e5f7f5445-kube-api-access-j88bc\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.076018 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.076042 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3e21edb-5737-49cd-bc9c-407e5f7f5445-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.445863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" event={"ID":"e3e21edb-5737-49cd-bc9c-407e5f7f5445","Type":"ContainerDied","Data":"db3a6faf04fcbd1a7dc3b48d3be2430d2baf4046aab0b93bd64b948f4f7a10d6"} Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.445952 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db3a6faf04fcbd1a7dc3b48d3be2430d2baf4046aab0b93bd64b948f4f7a10d6" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.445968 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ggjjn" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.544103 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb"] Nov 25 08:48:09 crc kubenswrapper[4760]: E1125 08:48:09.545506 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e21edb-5737-49cd-bc9c-407e5f7f5445" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.545527 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e21edb-5737-49cd-bc9c-407e5f7f5445" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.545770 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e21edb-5737-49cd-bc9c-407e5f7f5445" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.546599 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.550729 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.550793 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.551266 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.551322 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.551547 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.559353 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb"] Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.593041 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.593093 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.593525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6hl\" (UniqueName: \"kubernetes.io/projected/60d03216-7d4d-433d-9e84-7b6a6b399a5f-kube-api-access-bf6hl\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.593551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.694528 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.694627 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.694861 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bf6hl\" (UniqueName: \"kubernetes.io/projected/60d03216-7d4d-433d-9e84-7b6a6b399a5f-kube-api-access-bf6hl\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.694922 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.699583 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.701460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.708211 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 
08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.713176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6hl\" (UniqueName: \"kubernetes.io/projected/60d03216-7d4d-433d-9e84-7b6a6b399a5f-kube-api-access-bf6hl\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:09 crc kubenswrapper[4760]: I1125 08:48:09.867944 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:10 crc kubenswrapper[4760]: I1125 08:48:10.429219 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb"] Nov 25 08:48:10 crc kubenswrapper[4760]: I1125 08:48:10.455793 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" event={"ID":"60d03216-7d4d-433d-9e84-7b6a6b399a5f","Type":"ContainerStarted","Data":"980993edb285d34c2b5f62f6b0c5a5e98b2bca3e11694c1cc7862be4d73b0cb1"} Nov 25 08:48:11 crc kubenswrapper[4760]: I1125 08:48:11.466660 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" event={"ID":"60d03216-7d4d-433d-9e84-7b6a6b399a5f","Type":"ContainerStarted","Data":"201f4bd3f93f5c0bf192b8fbbbcff7a34859976b6b711727ce70b349ed09fd6f"} Nov 25 08:48:11 crc kubenswrapper[4760]: I1125 08:48:11.488995 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" podStartSLOduration=2.042853005 podStartE2EDuration="2.488976828s" podCreationTimestamp="2025-11-25 08:48:09 +0000 UTC" firstStartedPulling="2025-11-25 08:48:10.432869743 +0000 UTC m=+2224.141900538" lastFinishedPulling="2025-11-25 08:48:10.878993556 +0000 UTC 
m=+2224.588024361" observedRunningTime="2025-11-25 08:48:11.480655499 +0000 UTC m=+2225.189686304" watchObservedRunningTime="2025-11-25 08:48:11.488976828 +0000 UTC m=+2225.198007623" Nov 25 08:48:15 crc kubenswrapper[4760]: I1125 08:48:15.543864 4760 generic.go:334] "Generic (PLEG): container finished" podID="60d03216-7d4d-433d-9e84-7b6a6b399a5f" containerID="201f4bd3f93f5c0bf192b8fbbbcff7a34859976b6b711727ce70b349ed09fd6f" exitCode=0 Nov 25 08:48:15 crc kubenswrapper[4760]: I1125 08:48:15.543944 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" event={"ID":"60d03216-7d4d-433d-9e84-7b6a6b399a5f","Type":"ContainerDied","Data":"201f4bd3f93f5c0bf192b8fbbbcff7a34859976b6b711727ce70b349ed09fd6f"} Nov 25 08:48:16 crc kubenswrapper[4760]: I1125 08:48:16.969805 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.110451 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf6hl\" (UniqueName: \"kubernetes.io/projected/60d03216-7d4d-433d-9e84-7b6a6b399a5f-kube-api-access-bf6hl\") pod \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.110532 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ceph\") pod \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.110640 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-inventory\") pod \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\" (UID: 
\"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.110729 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ssh-key\") pod \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\" (UID: \"60d03216-7d4d-433d-9e84-7b6a6b399a5f\") " Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.120453 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ceph" (OuterVolumeSpecName: "ceph") pod "60d03216-7d4d-433d-9e84-7b6a6b399a5f" (UID: "60d03216-7d4d-433d-9e84-7b6a6b399a5f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.120470 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d03216-7d4d-433d-9e84-7b6a6b399a5f-kube-api-access-bf6hl" (OuterVolumeSpecName: "kube-api-access-bf6hl") pod "60d03216-7d4d-433d-9e84-7b6a6b399a5f" (UID: "60d03216-7d4d-433d-9e84-7b6a6b399a5f"). InnerVolumeSpecName "kube-api-access-bf6hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.142140 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "60d03216-7d4d-433d-9e84-7b6a6b399a5f" (UID: "60d03216-7d4d-433d-9e84-7b6a6b399a5f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.147426 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-inventory" (OuterVolumeSpecName: "inventory") pod "60d03216-7d4d-433d-9e84-7b6a6b399a5f" (UID: "60d03216-7d4d-433d-9e84-7b6a6b399a5f"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.212879 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf6hl\" (UniqueName: \"kubernetes.io/projected/60d03216-7d4d-433d-9e84-7b6a6b399a5f-kube-api-access-bf6hl\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.212923 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.212934 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.212942 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60d03216-7d4d-433d-9e84-7b6a6b399a5f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.560818 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" event={"ID":"60d03216-7d4d-433d-9e84-7b6a6b399a5f","Type":"ContainerDied","Data":"980993edb285d34c2b5f62f6b0c5a5e98b2bca3e11694c1cc7862be4d73b0cb1"} Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.561043 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="980993edb285d34c2b5f62f6b0c5a5e98b2bca3e11694c1cc7862be4d73b0cb1" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.560899 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.624053 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv"] Nov 25 08:48:17 crc kubenswrapper[4760]: E1125 08:48:17.624511 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d03216-7d4d-433d-9e84-7b6a6b399a5f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.624530 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d03216-7d4d-433d-9e84-7b6a6b399a5f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.624776 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d03216-7d4d-433d-9e84-7b6a6b399a5f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.625540 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.630722 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.630829 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.631097 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.631533 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.631766 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.639619 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv"] Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.736625 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.736953 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88nw\" (UniqueName: \"kubernetes.io/projected/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-kube-api-access-c88nw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: 
\"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.737114 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.737228 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.838449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.839233 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88nw\" (UniqueName: \"kubernetes.io/projected/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-kube-api-access-c88nw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.839472 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.839614 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.842338 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.842559 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.842755 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 
25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.856048 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88nw\" (UniqueName: \"kubernetes.io/projected/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-kube-api-access-c88nw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-824fv\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:17 crc kubenswrapper[4760]: I1125 08:48:17.947824 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:18 crc kubenswrapper[4760]: I1125 08:48:18.464209 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv"] Nov 25 08:48:18 crc kubenswrapper[4760]: I1125 08:48:18.571026 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" event={"ID":"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4","Type":"ContainerStarted","Data":"521b2e57602f6a2bfb902663d3a8b6c002748641b87a76ed195f174eaf68d8b8"} Nov 25 08:48:19 crc kubenswrapper[4760]: I1125 08:48:19.580482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" event={"ID":"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4","Type":"ContainerStarted","Data":"df06f436f107e5bbff0a8ee12ea3b68c4f10e2090e1a69d9170e90d13109dc2c"} Nov 25 08:48:19 crc kubenswrapper[4760]: I1125 08:48:19.597043 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" podStartSLOduration=2.178675437 podStartE2EDuration="2.597023482s" podCreationTimestamp="2025-11-25 08:48:17 +0000 UTC" firstStartedPulling="2025-11-25 08:48:18.469079972 +0000 UTC m=+2232.178110767" lastFinishedPulling="2025-11-25 08:48:18.887428017 +0000 UTC 
m=+2232.596458812" observedRunningTime="2025-11-25 08:48:19.592870773 +0000 UTC m=+2233.301901578" watchObservedRunningTime="2025-11-25 08:48:19.597023482 +0000 UTC m=+2233.306054277" Nov 25 08:48:57 crc kubenswrapper[4760]: I1125 08:48:57.890602 4760 generic.go:334] "Generic (PLEG): container finished" podID="bbb80fb1-9cd8-4326-9db9-88edd50fc0d4" containerID="df06f436f107e5bbff0a8ee12ea3b68c4f10e2090e1a69d9170e90d13109dc2c" exitCode=0 Nov 25 08:48:57 crc kubenswrapper[4760]: I1125 08:48:57.890692 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" event={"ID":"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4","Type":"ContainerDied","Data":"df06f436f107e5bbff0a8ee12ea3b68c4f10e2090e1a69d9170e90d13109dc2c"} Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.276410 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.414395 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ceph\") pod \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.414538 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-inventory\") pod \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.414747 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ssh-key\") pod \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\" (UID: 
\"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.414801 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c88nw\" (UniqueName: \"kubernetes.io/projected/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-kube-api-access-c88nw\") pod \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\" (UID: \"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4\") " Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.422131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-kube-api-access-c88nw" (OuterVolumeSpecName: "kube-api-access-c88nw") pod "bbb80fb1-9cd8-4326-9db9-88edd50fc0d4" (UID: "bbb80fb1-9cd8-4326-9db9-88edd50fc0d4"). InnerVolumeSpecName "kube-api-access-c88nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.422779 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ceph" (OuterVolumeSpecName: "ceph") pod "bbb80fb1-9cd8-4326-9db9-88edd50fc0d4" (UID: "bbb80fb1-9cd8-4326-9db9-88edd50fc0d4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.444586 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-inventory" (OuterVolumeSpecName: "inventory") pod "bbb80fb1-9cd8-4326-9db9-88edd50fc0d4" (UID: "bbb80fb1-9cd8-4326-9db9-88edd50fc0d4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.464548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bbb80fb1-9cd8-4326-9db9-88edd50fc0d4" (UID: "bbb80fb1-9cd8-4326-9db9-88edd50fc0d4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.516570 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.516615 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.516647 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.516661 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c88nw\" (UniqueName: \"kubernetes.io/projected/bbb80fb1-9cd8-4326-9db9-88edd50fc0d4-kube-api-access-c88nw\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.909096 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" event={"ID":"bbb80fb1-9cd8-4326-9db9-88edd50fc0d4","Type":"ContainerDied","Data":"521b2e57602f6a2bfb902663d3a8b6c002748641b87a76ed195f174eaf68d8b8"} Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.909146 4760 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="521b2e57602f6a2bfb902663d3a8b6c002748641b87a76ed195f174eaf68d8b8" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.909649 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-824fv" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.991047 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jsv2p"] Nov 25 08:48:59 crc kubenswrapper[4760]: E1125 08:48:59.991519 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb80fb1-9cd8-4326-9db9-88edd50fc0d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.991543 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb80fb1-9cd8-4326-9db9-88edd50fc0d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.991792 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbb80fb1-9cd8-4326-9db9-88edd50fc0d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.992559 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.995393 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.995439 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.995582 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.995884 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:48:59 crc kubenswrapper[4760]: I1125 08:48:59.998735 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.006564 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jsv2p"] Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.129687 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.129730 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ceph\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 
crc kubenswrapper[4760]: I1125 08:49:00.129826 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.130009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8682\" (UniqueName: \"kubernetes.io/projected/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-kube-api-access-f8682\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.231413 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8682\" (UniqueName: \"kubernetes.io/projected/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-kube-api-access-f8682\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.231536 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.231569 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ceph\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: 
\"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.231654 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.236934 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.236939 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.238897 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ceph\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.250602 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8682\" (UniqueName: \"kubernetes.io/projected/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-kube-api-access-f8682\") pod \"ssh-known-hosts-edpm-deployment-jsv2p\" (UID: 
\"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.310350 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.609740 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jsv2p"] Nov 25 08:49:00 crc kubenswrapper[4760]: I1125 08:49:00.917362 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" event={"ID":"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d","Type":"ContainerStarted","Data":"234844f995a858aa9f5836da1057d78581ba17704849e6410fe61874882464ba"} Nov 25 08:49:01 crc kubenswrapper[4760]: I1125 08:49:01.925894 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" event={"ID":"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d","Type":"ContainerStarted","Data":"8ea95586aa0e25fcce9c9434d27dc18d8f6135f5cffc766d8e745d4cd889ff9e"} Nov 25 08:49:01 crc kubenswrapper[4760]: I1125 08:49:01.948939 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" podStartSLOduration=2.482540908 podStartE2EDuration="2.948919583s" podCreationTimestamp="2025-11-25 08:48:59 +0000 UTC" firstStartedPulling="2025-11-25 08:49:00.614419266 +0000 UTC m=+2274.323450071" lastFinishedPulling="2025-11-25 08:49:01.080797951 +0000 UTC m=+2274.789828746" observedRunningTime="2025-11-25 08:49:01.939289006 +0000 UTC m=+2275.648319801" watchObservedRunningTime="2025-11-25 08:49:01.948919583 +0000 UTC m=+2275.657950378" Nov 25 08:49:09 crc kubenswrapper[4760]: I1125 08:49:09.985205 4760 generic.go:334] "Generic (PLEG): container finished" podID="6f68ee3f-7d13-433a-bc6b-504e98ff7b1d" containerID="8ea95586aa0e25fcce9c9434d27dc18d8f6135f5cffc766d8e745d4cd889ff9e" 
exitCode=0 Nov 25 08:49:09 crc kubenswrapper[4760]: I1125 08:49:09.985305 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" event={"ID":"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d","Type":"ContainerDied","Data":"8ea95586aa0e25fcce9c9434d27dc18d8f6135f5cffc766d8e745d4cd889ff9e"} Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.390940 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.538971 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-inventory-0\") pod \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.539499 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ssh-key-openstack-edpm-ipam\") pod \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.539605 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ceph\") pod \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.539649 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8682\" (UniqueName: \"kubernetes.io/projected/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-kube-api-access-f8682\") pod \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\" (UID: \"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d\") " Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 
08:49:11.545611 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-kube-api-access-f8682" (OuterVolumeSpecName: "kube-api-access-f8682") pod "6f68ee3f-7d13-433a-bc6b-504e98ff7b1d" (UID: "6f68ee3f-7d13-433a-bc6b-504e98ff7b1d"). InnerVolumeSpecName "kube-api-access-f8682". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.550511 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ceph" (OuterVolumeSpecName: "ceph") pod "6f68ee3f-7d13-433a-bc6b-504e98ff7b1d" (UID: "6f68ee3f-7d13-433a-bc6b-504e98ff7b1d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.566786 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6f68ee3f-7d13-433a-bc6b-504e98ff7b1d" (UID: "6f68ee3f-7d13-433a-bc6b-504e98ff7b1d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.567390 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6f68ee3f-7d13-433a-bc6b-504e98ff7b1d" (UID: "6f68ee3f-7d13-433a-bc6b-504e98ff7b1d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.642077 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.642118 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.642133 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8682\" (UniqueName: \"kubernetes.io/projected/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-kube-api-access-f8682\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:11 crc kubenswrapper[4760]: I1125 08:49:11.642144 4760 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6f68ee3f-7d13-433a-bc6b-504e98ff7b1d-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.009022 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" event={"ID":"6f68ee3f-7d13-433a-bc6b-504e98ff7b1d","Type":"ContainerDied","Data":"234844f995a858aa9f5836da1057d78581ba17704849e6410fe61874882464ba"} Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.009078 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="234844f995a858aa9f5836da1057d78581ba17704849e6410fe61874882464ba" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.009097 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jsv2p" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.071123 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh"] Nov 25 08:49:12 crc kubenswrapper[4760]: E1125 08:49:12.071541 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f68ee3f-7d13-433a-bc6b-504e98ff7b1d" containerName="ssh-known-hosts-edpm-deployment" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.071560 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f68ee3f-7d13-433a-bc6b-504e98ff7b1d" containerName="ssh-known-hosts-edpm-deployment" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.071763 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f68ee3f-7d13-433a-bc6b-504e98ff7b1d" containerName="ssh-known-hosts-edpm-deployment" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.072354 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.075210 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.076051 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.077723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.084937 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.085307 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.092569 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh"] Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.254337 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmdk\" (UniqueName: \"kubernetes.io/projected/907a9527-c37d-4e36-9a7e-35066c230b6d-kube-api-access-7lmdk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.254452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.254487 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.254527 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.356299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.356859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.356954 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ssh-key\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.357152 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmdk\" (UniqueName: \"kubernetes.io/projected/907a9527-c37d-4e36-9a7e-35066c230b6d-kube-api-access-7lmdk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.362121 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.362887 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.363998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.373572 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmdk\" (UniqueName: 
\"kubernetes.io/projected/907a9527-c37d-4e36-9a7e-35066c230b6d-kube-api-access-7lmdk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-n86sh\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.391212 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:12 crc kubenswrapper[4760]: I1125 08:49:12.901661 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh"] Nov 25 08:49:13 crc kubenswrapper[4760]: I1125 08:49:13.030642 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" event={"ID":"907a9527-c37d-4e36-9a7e-35066c230b6d","Type":"ContainerStarted","Data":"26d9fae3ec854097683e6996ccd4a0402e24a20fd4ce04b41548d3a781193cdd"} Nov 25 08:49:14 crc kubenswrapper[4760]: I1125 08:49:14.042899 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" event={"ID":"907a9527-c37d-4e36-9a7e-35066c230b6d","Type":"ContainerStarted","Data":"17765e50ce600d6554c014229b88b1f40228ea61294e6ab11a9aa62287eec5a7"} Nov 25 08:49:14 crc kubenswrapper[4760]: I1125 08:49:14.069686 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" podStartSLOduration=1.609866914 podStartE2EDuration="2.06967037s" podCreationTimestamp="2025-11-25 08:49:12 +0000 UTC" firstStartedPulling="2025-11-25 08:49:12.907054575 +0000 UTC m=+2286.616085380" lastFinishedPulling="2025-11-25 08:49:13.366858041 +0000 UTC m=+2287.075888836" observedRunningTime="2025-11-25 08:49:14.062978828 +0000 UTC m=+2287.772009663" watchObservedRunningTime="2025-11-25 08:49:14.06967037 +0000 UTC m=+2287.778701155" Nov 25 08:49:21 crc 
kubenswrapper[4760]: I1125 08:49:21.119375 4760 generic.go:334] "Generic (PLEG): container finished" podID="907a9527-c37d-4e36-9a7e-35066c230b6d" containerID="17765e50ce600d6554c014229b88b1f40228ea61294e6ab11a9aa62287eec5a7" exitCode=0 Nov 25 08:49:21 crc kubenswrapper[4760]: I1125 08:49:21.119491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" event={"ID":"907a9527-c37d-4e36-9a7e-35066c230b6d","Type":"ContainerDied","Data":"17765e50ce600d6554c014229b88b1f40228ea61294e6ab11a9aa62287eec5a7"} Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.536768 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.648442 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lmdk\" (UniqueName: \"kubernetes.io/projected/907a9527-c37d-4e36-9a7e-35066c230b6d-kube-api-access-7lmdk\") pod \"907a9527-c37d-4e36-9a7e-35066c230b6d\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.648493 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ssh-key\") pod \"907a9527-c37d-4e36-9a7e-35066c230b6d\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.648512 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-inventory\") pod \"907a9527-c37d-4e36-9a7e-35066c230b6d\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.648537 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ceph\") pod \"907a9527-c37d-4e36-9a7e-35066c230b6d\" (UID: \"907a9527-c37d-4e36-9a7e-35066c230b6d\") " Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.654109 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ceph" (OuterVolumeSpecName: "ceph") pod "907a9527-c37d-4e36-9a7e-35066c230b6d" (UID: "907a9527-c37d-4e36-9a7e-35066c230b6d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.654554 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907a9527-c37d-4e36-9a7e-35066c230b6d-kube-api-access-7lmdk" (OuterVolumeSpecName: "kube-api-access-7lmdk") pod "907a9527-c37d-4e36-9a7e-35066c230b6d" (UID: "907a9527-c37d-4e36-9a7e-35066c230b6d"). InnerVolumeSpecName "kube-api-access-7lmdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.675302 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-inventory" (OuterVolumeSpecName: "inventory") pod "907a9527-c37d-4e36-9a7e-35066c230b6d" (UID: "907a9527-c37d-4e36-9a7e-35066c230b6d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.675874 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "907a9527-c37d-4e36-9a7e-35066c230b6d" (UID: "907a9527-c37d-4e36-9a7e-35066c230b6d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.751662 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lmdk\" (UniqueName: \"kubernetes.io/projected/907a9527-c37d-4e36-9a7e-35066c230b6d-kube-api-access-7lmdk\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.751692 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.751703 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:22 crc kubenswrapper[4760]: I1125 08:49:22.751712 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/907a9527-c37d-4e36-9a7e-35066c230b6d-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.137851 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" event={"ID":"907a9527-c37d-4e36-9a7e-35066c230b6d","Type":"ContainerDied","Data":"26d9fae3ec854097683e6996ccd4a0402e24a20fd4ce04b41548d3a781193cdd"} Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.137892 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d9fae3ec854097683e6996ccd4a0402e24a20fd4ce04b41548d3a781193cdd" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.137957 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-n86sh" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.220663 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp"] Nov 25 08:49:23 crc kubenswrapper[4760]: E1125 08:49:23.221020 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907a9527-c37d-4e36-9a7e-35066c230b6d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.221038 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="907a9527-c37d-4e36-9a7e-35066c230b6d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.221225 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="907a9527-c37d-4e36-9a7e-35066c230b6d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.221828 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.224470 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.225137 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.225174 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.225192 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.226586 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.236232 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp"] Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.261456 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.261563 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vqw6\" (UniqueName: \"kubernetes.io/projected/375f35df-5fe0-4456-9d10-649e72a962a7-kube-api-access-4vqw6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.261726 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.261766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.363194 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.363515 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.363627 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.363730 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vqw6\" (UniqueName: \"kubernetes.io/projected/375f35df-5fe0-4456-9d10-649e72a962a7-kube-api-access-4vqw6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.372059 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.372059 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.372131 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.379941 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vqw6\" (UniqueName: \"kubernetes.io/projected/375f35df-5fe0-4456-9d10-649e72a962a7-kube-api-access-4vqw6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:23 crc kubenswrapper[4760]: I1125 08:49:23.536339 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:24 crc kubenswrapper[4760]: I1125 08:49:24.042110 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp"] Nov 25 08:49:24 crc kubenswrapper[4760]: I1125 08:49:24.146433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" event={"ID":"375f35df-5fe0-4456-9d10-649e72a962a7","Type":"ContainerStarted","Data":"64b935bb419d695a25e95457489376a89c131a7b4a857a363c39647a35c25b86"} Nov 25 08:49:25 crc kubenswrapper[4760]: I1125 08:49:25.154652 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" event={"ID":"375f35df-5fe0-4456-9d10-649e72a962a7","Type":"ContainerStarted","Data":"73f9952e5757682305c9b611b6896ea37ca5d6554a47b46be546c5350fe0d7ad"} Nov 25 08:49:25 crc kubenswrapper[4760]: I1125 08:49:25.171840 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" podStartSLOduration=1.663010667 podStartE2EDuration="2.171824032s" podCreationTimestamp="2025-11-25 08:49:23 +0000 UTC" firstStartedPulling="2025-11-25 08:49:24.04970481 +0000 UTC m=+2297.758735605" lastFinishedPulling="2025-11-25 08:49:24.558518175 +0000 UTC m=+2298.267548970" observedRunningTime="2025-11-25 08:49:25.171637716 +0000 UTC 
m=+2298.880668532" watchObservedRunningTime="2025-11-25 08:49:25.171824032 +0000 UTC m=+2298.880854827" Nov 25 08:49:34 crc kubenswrapper[4760]: I1125 08:49:34.222231 4760 generic.go:334] "Generic (PLEG): container finished" podID="375f35df-5fe0-4456-9d10-649e72a962a7" containerID="73f9952e5757682305c9b611b6896ea37ca5d6554a47b46be546c5350fe0d7ad" exitCode=0 Nov 25 08:49:34 crc kubenswrapper[4760]: I1125 08:49:34.222333 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" event={"ID":"375f35df-5fe0-4456-9d10-649e72a962a7","Type":"ContainerDied","Data":"73f9952e5757682305c9b611b6896ea37ca5d6554a47b46be546c5350fe0d7ad"} Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.599461 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.605905 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ceph\") pod \"375f35df-5fe0-4456-9d10-649e72a962a7\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.605940 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-inventory\") pod \"375f35df-5fe0-4456-9d10-649e72a962a7\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.605969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vqw6\" (UniqueName: \"kubernetes.io/projected/375f35df-5fe0-4456-9d10-649e72a962a7-kube-api-access-4vqw6\") pod \"375f35df-5fe0-4456-9d10-649e72a962a7\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 
08:49:35.606001 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ssh-key\") pod \"375f35df-5fe0-4456-9d10-649e72a962a7\" (UID: \"375f35df-5fe0-4456-9d10-649e72a962a7\") " Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.611225 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ceph" (OuterVolumeSpecName: "ceph") pod "375f35df-5fe0-4456-9d10-649e72a962a7" (UID: "375f35df-5fe0-4456-9d10-649e72a962a7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.612072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375f35df-5fe0-4456-9d10-649e72a962a7-kube-api-access-4vqw6" (OuterVolumeSpecName: "kube-api-access-4vqw6") pod "375f35df-5fe0-4456-9d10-649e72a962a7" (UID: "375f35df-5fe0-4456-9d10-649e72a962a7"). InnerVolumeSpecName "kube-api-access-4vqw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.633842 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "375f35df-5fe0-4456-9d10-649e72a962a7" (UID: "375f35df-5fe0-4456-9d10-649e72a962a7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.640321 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-inventory" (OuterVolumeSpecName: "inventory") pod "375f35df-5fe0-4456-9d10-649e72a962a7" (UID: "375f35df-5fe0-4456-9d10-649e72a962a7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.707588 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.707621 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.707635 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vqw6\" (UniqueName: \"kubernetes.io/projected/375f35df-5fe0-4456-9d10-649e72a962a7-kube-api-access-4vqw6\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:35 crc kubenswrapper[4760]: I1125 08:49:35.707651 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/375f35df-5fe0-4456-9d10-649e72a962a7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.239237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" event={"ID":"375f35df-5fe0-4456-9d10-649e72a962a7","Type":"ContainerDied","Data":"64b935bb419d695a25e95457489376a89c131a7b4a857a363c39647a35c25b86"} Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.239520 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b935bb419d695a25e95457489376a89c131a7b4a857a363c39647a35c25b86" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.239319 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.316900 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j"] Nov 25 08:49:36 crc kubenswrapper[4760]: E1125 08:49:36.317270 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375f35df-5fe0-4456-9d10-649e72a962a7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.317288 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="375f35df-5fe0-4456-9d10-649e72a962a7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.317472 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="375f35df-5fe0-4456-9d10-649e72a962a7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.318049 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.321606 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.321737 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.331060 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.331568 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.332343 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.332569 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.332748 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.332942 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.337022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j"] Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519605 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519704 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519744 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9sff\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-kube-api-access-d9sff\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519776 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519799 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519850 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519921 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.519989 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.520015 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.520049 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: 
\"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621284 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621323 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9sff\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-kube-api-access-d9sff\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621393 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621425 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621458 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc 
kubenswrapper[4760]: I1125 08:49:36.621511 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621536 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621575 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.621615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.626876 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.626964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.627219 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.627586 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" 
Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.627713 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.628186 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.628673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.628781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.629119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.629179 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.630136 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.631801 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.641776 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9sff\" (UniqueName: 
\"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-kube-api-access-d9sff\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:36 crc kubenswrapper[4760]: I1125 08:49:36.940384 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:49:37 crc kubenswrapper[4760]: I1125 08:49:37.436725 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j"] Nov 25 08:49:37 crc kubenswrapper[4760]: I1125 08:49:37.444025 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:49:38 crc kubenswrapper[4760]: I1125 08:49:38.258440 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" event={"ID":"be1883ad-ca79-4bec-89f9-9b783c5047df","Type":"ContainerStarted","Data":"e500b4d170a1daf3d0745791fada07b6b9bfbb32128a152257650bebfc883475"} Nov 25 08:49:38 crc kubenswrapper[4760]: I1125 08:49:38.258778 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" event={"ID":"be1883ad-ca79-4bec-89f9-9b783c5047df","Type":"ContainerStarted","Data":"b880a7fb6fcb9b7a75b4251a2446a7b6a1172c2b187b7f6a50cad453e6d429ce"} Nov 25 08:49:38 crc kubenswrapper[4760]: I1125 08:49:38.281027 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" podStartSLOduration=1.883088532 podStartE2EDuration="2.28100974s" podCreationTimestamp="2025-11-25 08:49:36 +0000 UTC" firstStartedPulling="2025-11-25 08:49:37.443802456 +0000 UTC m=+2311.152833251" lastFinishedPulling="2025-11-25 08:49:37.841723644 +0000 UTC 
m=+2311.550754459" observedRunningTime="2025-11-25 08:49:38.274096251 +0000 UTC m=+2311.983127046" watchObservedRunningTime="2025-11-25 08:49:38.28100974 +0000 UTC m=+2311.990040535" Nov 25 08:50:01 crc kubenswrapper[4760]: I1125 08:50:01.746311 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:50:01 crc kubenswrapper[4760]: I1125 08:50:01.746854 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:50:07 crc kubenswrapper[4760]: I1125 08:50:07.494146 4760 generic.go:334] "Generic (PLEG): container finished" podID="be1883ad-ca79-4bec-89f9-9b783c5047df" containerID="e500b4d170a1daf3d0745791fada07b6b9bfbb32128a152257650bebfc883475" exitCode=0 Nov 25 08:50:07 crc kubenswrapper[4760]: I1125 08:50:07.494219 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" event={"ID":"be1883ad-ca79-4bec-89f9-9b783c5047df","Type":"ContainerDied","Data":"e500b4d170a1daf3d0745791fada07b6b9bfbb32128a152257650bebfc883475"} Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.893969 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.957917 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-ovn-default-certs-0\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958170 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-inventory\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958190 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958227 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9sff\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-kube-api-access-d9sff\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958488 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-nova-combined-ca-bundle\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 
08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958543 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ovn-combined-ca-bundle\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958612 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-bootstrap-combined-ca-bundle\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958665 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-repo-setup-combined-ca-bundle\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958705 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-neutron-metadata-combined-ca-bundle\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958720 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ceph\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958794 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-libvirt-combined-ca-bundle\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958825 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ssh-key\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.958864 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"be1883ad-ca79-4bec-89f9-9b783c5047df\" (UID: \"be1883ad-ca79-4bec-89f9-9b783c5047df\") " Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.964585 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.964653 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.964711 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.965244 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.965324 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.965444 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-kube-api-access-d9sff" (OuterVolumeSpecName: "kube-api-access-d9sff") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "kube-api-access-d9sff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.965719 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.967612 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ceph" (OuterVolumeSpecName: "ceph") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.968441 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.968829 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.968859 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.988565 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-inventory" (OuterVolumeSpecName: "inventory") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:08 crc kubenswrapper[4760]: I1125 08:50:08.997571 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be1883ad-ca79-4bec-89f9-9b783c5047df" (UID: "be1883ad-ca79-4bec-89f9-9b783c5047df"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.060951 4760 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061050 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061072 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061090 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061109 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061130 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061148 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061164 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061182 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061201 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061219 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be1883ad-ca79-4bec-89f9-9b783c5047df-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061238 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.061286 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9sff\" (UniqueName: \"kubernetes.io/projected/be1883ad-ca79-4bec-89f9-9b783c5047df-kube-api-access-d9sff\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.510043 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" event={"ID":"be1883ad-ca79-4bec-89f9-9b783c5047df","Type":"ContainerDied","Data":"b880a7fb6fcb9b7a75b4251a2446a7b6a1172c2b187b7f6a50cad453e6d429ce"} Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.510267 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b880a7fb6fcb9b7a75b4251a2446a7b6a1172c2b187b7f6a50cad453e6d429ce" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.510172 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.622158 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb"] Nov 25 08:50:09 crc kubenswrapper[4760]: E1125 08:50:09.622582 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1883ad-ca79-4bec-89f9-9b783c5047df" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.622599 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1883ad-ca79-4bec-89f9-9b783c5047df" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.622754 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1883ad-ca79-4bec-89f9-9b783c5047df" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.623331 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.625269 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.625633 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.625869 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.625936 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.629986 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.633972 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb"] Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.674260 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.674557 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.674583 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.674779 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fst9r\" (UniqueName: \"kubernetes.io/projected/5d87e41c-e89d-4b52-83b7-79d77bee80d9-kube-api-access-fst9r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.776054 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fst9r\" (UniqueName: \"kubernetes.io/projected/5d87e41c-e89d-4b52-83b7-79d77bee80d9-kube-api-access-fst9r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.776147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.776207 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.776228 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.779811 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.791932 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.802242 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 
08:50:09.815659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fst9r\" (UniqueName: \"kubernetes.io/projected/5d87e41c-e89d-4b52-83b7-79d77bee80d9-kube-api-access-fst9r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:09 crc kubenswrapper[4760]: I1125 08:50:09.938877 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:10 crc kubenswrapper[4760]: I1125 08:50:10.476226 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb"] Nov 25 08:50:10 crc kubenswrapper[4760]: I1125 08:50:10.527677 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" event={"ID":"5d87e41c-e89d-4b52-83b7-79d77bee80d9","Type":"ContainerStarted","Data":"04c1baa687b071ffe5dffb69f94b18e2cf55c5e2313d41ebe4ffaba47c78d891"} Nov 25 08:50:11 crc kubenswrapper[4760]: I1125 08:50:11.537897 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" event={"ID":"5d87e41c-e89d-4b52-83b7-79d77bee80d9","Type":"ContainerStarted","Data":"4b449ed8d798e6fa00ac7c4f2b7de122e012b14aa8a4de1f5279ef80b48e6a2b"} Nov 25 08:50:11 crc kubenswrapper[4760]: I1125 08:50:11.559155 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" podStartSLOduration=1.747654185 podStartE2EDuration="2.559136568s" podCreationTimestamp="2025-11-25 08:50:09 +0000 UTC" firstStartedPulling="2025-11-25 08:50:10.479705343 +0000 UTC m=+2344.188736138" lastFinishedPulling="2025-11-25 08:50:11.291187726 +0000 UTC m=+2345.000218521" observedRunningTime="2025-11-25 
08:50:11.554087533 +0000 UTC m=+2345.263118338" watchObservedRunningTime="2025-11-25 08:50:11.559136568 +0000 UTC m=+2345.268167363" Nov 25 08:50:16 crc kubenswrapper[4760]: I1125 08:50:16.582410 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d87e41c-e89d-4b52-83b7-79d77bee80d9" containerID="4b449ed8d798e6fa00ac7c4f2b7de122e012b14aa8a4de1f5279ef80b48e6a2b" exitCode=0 Nov 25 08:50:16 crc kubenswrapper[4760]: I1125 08:50:16.582490 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" event={"ID":"5d87e41c-e89d-4b52-83b7-79d77bee80d9","Type":"ContainerDied","Data":"4b449ed8d798e6fa00ac7c4f2b7de122e012b14aa8a4de1f5279ef80b48e6a2b"} Nov 25 08:50:17 crc kubenswrapper[4760]: I1125 08:50:17.962231 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.064862 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ceph\") pod \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.064982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-inventory\") pod \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.065069 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fst9r\" (UniqueName: \"kubernetes.io/projected/5d87e41c-e89d-4b52-83b7-79d77bee80d9-kube-api-access-fst9r\") pod \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " Nov 25 08:50:18 
crc kubenswrapper[4760]: I1125 08:50:18.065163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ssh-key\") pod \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\" (UID: \"5d87e41c-e89d-4b52-83b7-79d77bee80d9\") " Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.070337 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ceph" (OuterVolumeSpecName: "ceph") pod "5d87e41c-e89d-4b52-83b7-79d77bee80d9" (UID: "5d87e41c-e89d-4b52-83b7-79d77bee80d9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.070993 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d87e41c-e89d-4b52-83b7-79d77bee80d9-kube-api-access-fst9r" (OuterVolumeSpecName: "kube-api-access-fst9r") pod "5d87e41c-e89d-4b52-83b7-79d77bee80d9" (UID: "5d87e41c-e89d-4b52-83b7-79d77bee80d9"). InnerVolumeSpecName "kube-api-access-fst9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.094614 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-inventory" (OuterVolumeSpecName: "inventory") pod "5d87e41c-e89d-4b52-83b7-79d77bee80d9" (UID: "5d87e41c-e89d-4b52-83b7-79d77bee80d9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.096420 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5d87e41c-e89d-4b52-83b7-79d77bee80d9" (UID: "5d87e41c-e89d-4b52-83b7-79d77bee80d9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.167849 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.167877 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.167889 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d87e41c-e89d-4b52-83b7-79d77bee80d9-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.167901 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fst9r\" (UniqueName: \"kubernetes.io/projected/5d87e41c-e89d-4b52-83b7-79d77bee80d9-kube-api-access-fst9r\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.601727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" event={"ID":"5d87e41c-e89d-4b52-83b7-79d77bee80d9","Type":"ContainerDied","Data":"04c1baa687b071ffe5dffb69f94b18e2cf55c5e2313d41ebe4ffaba47c78d891"} Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.601762 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04c1baa687b071ffe5dffb69f94b18e2cf55c5e2313d41ebe4ffaba47c78d891" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.601793 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.665521 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v"] Nov 25 08:50:18 crc kubenswrapper[4760]: E1125 08:50:18.665924 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d87e41c-e89d-4b52-83b7-79d77bee80d9" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.665947 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d87e41c-e89d-4b52-83b7-79d77bee80d9" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.666099 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d87e41c-e89d-4b52-83b7-79d77bee80d9" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.666910 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.676752 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.677095 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.677230 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.677668 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.677930 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.679416 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.699294 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v"] Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.779329 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.779430 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.779463 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.779500 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.779535 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghwn7\" (UniqueName: \"kubernetes.io/projected/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-kube-api-access-ghwn7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.779613 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.881096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.881148 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.881195 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.881233 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghwn7\" (UniqueName: \"kubernetes.io/projected/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-kube-api-access-ghwn7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.881332 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.881417 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.882661 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.885288 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.885840 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.887669 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.888310 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.899485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghwn7\" (UniqueName: \"kubernetes.io/projected/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-kube-api-access-ghwn7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-kjm4v\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:18 crc kubenswrapper[4760]: I1125 08:50:18.993592 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:50:19 crc kubenswrapper[4760]: I1125 08:50:19.466192 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v"] Nov 25 08:50:19 crc kubenswrapper[4760]: I1125 08:50:19.609916 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" event={"ID":"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f","Type":"ContainerStarted","Data":"027527eb3350ce0ae817bea1410cbe6e604f9448c833c5a13d20cda6abef01ee"} Nov 25 08:50:20 crc kubenswrapper[4760]: I1125 08:50:20.618704 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" event={"ID":"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f","Type":"ContainerStarted","Data":"d149cbc31453fa1744489330b2d6460980a44042303a2dcaecf2f780ec0ad30d"} Nov 25 08:50:20 crc kubenswrapper[4760]: I1125 08:50:20.648144 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" podStartSLOduration=2.11659813 podStartE2EDuration="2.648119397s" podCreationTimestamp="2025-11-25 08:50:18 +0000 UTC" firstStartedPulling="2025-11-25 08:50:19.467007509 +0000 UTC m=+2353.176038304" lastFinishedPulling="2025-11-25 08:50:19.998528776 +0000 UTC m=+2353.707559571" observedRunningTime="2025-11-25 08:50:20.640446996 +0000 UTC m=+2354.349477801" watchObservedRunningTime="2025-11-25 08:50:20.648119397 +0000 UTC m=+2354.357150212" Nov 25 08:50:31 crc kubenswrapper[4760]: I1125 08:50:31.746831 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:50:31 crc kubenswrapper[4760]: I1125 08:50:31.747800 4760 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:51:01 crc kubenswrapper[4760]: I1125 08:51:01.746314 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:51:01 crc kubenswrapper[4760]: I1125 08:51:01.747036 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:51:01 crc kubenswrapper[4760]: I1125 08:51:01.747110 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:51:01 crc kubenswrapper[4760]: I1125 08:51:01.748092 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:51:01 crc kubenswrapper[4760]: I1125 08:51:01.748162 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" 
containerName="machine-config-daemon" containerID="cri-o://4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" gracePeriod=600 Nov 25 08:51:01 crc kubenswrapper[4760]: E1125 08:51:01.876220 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:51:01 crc kubenswrapper[4760]: I1125 08:51:01.952586 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" exitCode=0 Nov 25 08:51:01 crc kubenswrapper[4760]: I1125 08:51:01.952624 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72"} Nov 25 08:51:01 crc kubenswrapper[4760]: I1125 08:51:01.952652 4760 scope.go:117] "RemoveContainer" containerID="a867ce918c353a52f7d744d4ae5764d73a3af9c88d9c5804bb0260064416eb30" Nov 25 08:51:01 crc kubenswrapper[4760]: I1125 08:51:01.953401 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:51:01 crc kubenswrapper[4760]: E1125 08:51:01.953668 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:51:12 crc kubenswrapper[4760]: I1125 08:51:12.939353 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:51:12 crc kubenswrapper[4760]: E1125 08:51:12.940288 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:51:24 crc kubenswrapper[4760]: I1125 08:51:24.155959 4760 generic.go:334] "Generic (PLEG): container finished" podID="eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" containerID="d149cbc31453fa1744489330b2d6460980a44042303a2dcaecf2f780ec0ad30d" exitCode=0 Nov 25 08:51:24 crc kubenswrapper[4760]: I1125 08:51:24.156043 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" event={"ID":"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f","Type":"ContainerDied","Data":"d149cbc31453fa1744489330b2d6460980a44042303a2dcaecf2f780ec0ad30d"} Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.588640 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.782153 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovn-combined-ca-bundle\") pod \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.782214 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-inventory\") pod \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.782235 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovncontroller-config-0\") pod \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.782364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ssh-key\") pod \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.782396 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghwn7\" (UniqueName: \"kubernetes.io/projected/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-kube-api-access-ghwn7\") pod \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.782440 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ceph\") pod \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\" (UID: \"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f\") " Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.789795 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" (UID: "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.792359 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ceph" (OuterVolumeSpecName: "ceph") pod "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" (UID: "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.797012 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-kube-api-access-ghwn7" (OuterVolumeSpecName: "kube-api-access-ghwn7") pod "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" (UID: "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f"). InnerVolumeSpecName "kube-api-access-ghwn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.812883 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" (UID: "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.816017 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-inventory" (OuterVolumeSpecName: "inventory") pod "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" (UID: "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.822916 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" (UID: "eaf0aab3-fbd3-4389-ab45-8bd1c834f48f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.885192 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.885239 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.885266 4760 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.885278 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:51:25 crc 
kubenswrapper[4760]: I1125 08:51:25.885291 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghwn7\" (UniqueName: \"kubernetes.io/projected/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-kube-api-access-ghwn7\") on node \"crc\" DevicePath \"\"" Nov 25 08:51:25 crc kubenswrapper[4760]: I1125 08:51:25.885302 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eaf0aab3-fbd3-4389-ab45-8bd1c834f48f-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.173385 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" event={"ID":"eaf0aab3-fbd3-4389-ab45-8bd1c834f48f","Type":"ContainerDied","Data":"027527eb3350ce0ae817bea1410cbe6e604f9448c833c5a13d20cda6abef01ee"} Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.173415 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-kjm4v" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.173439 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027527eb3350ce0ae817bea1410cbe6e604f9448c833c5a13d20cda6abef01ee" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.266762 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827"] Nov 25 08:51:26 crc kubenswrapper[4760]: E1125 08:51:26.267155 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.267178 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.267363 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="eaf0aab3-fbd3-4389-ab45-8bd1c834f48f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.267996 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.270970 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.271529 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.271574 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.271533 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.271535 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.272452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.273746 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.280847 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827"] Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.394672 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.394943 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.395050 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.395109 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxqwq\" (UniqueName: \"kubernetes.io/projected/01b4af7c-f553-48d7-9166-856497bbe664-kube-api-access-fxqwq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.395237 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.395301 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.395388 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.497410 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.497458 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.497532 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.497570 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.497596 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.497708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.497759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxqwq\" (UniqueName: \"kubernetes.io/projected/01b4af7c-f553-48d7-9166-856497bbe664-kube-api-access-fxqwq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.501612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.501937 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.502657 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.504185 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.506518 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.507800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.521554 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxqwq\" (UniqueName: \"kubernetes.io/projected/01b4af7c-f553-48d7-9166-856497bbe664-kube-api-access-fxqwq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.585768 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:51:26 crc kubenswrapper[4760]: I1125 08:51:26.946767 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:51:26 crc kubenswrapper[4760]: E1125 08:51:26.947325 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:51:27 crc kubenswrapper[4760]: I1125 08:51:27.145918 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827"] Nov 25 08:51:27 crc kubenswrapper[4760]: I1125 08:51:27.182124 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" event={"ID":"01b4af7c-f553-48d7-9166-856497bbe664","Type":"ContainerStarted","Data":"309e7db8aa9dadce0edd4533946436a88a5ec489c1a76650381b49b1ee04741b"} Nov 25 08:51:28 crc kubenswrapper[4760]: I1125 08:51:28.195453 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" event={"ID":"01b4af7c-f553-48d7-9166-856497bbe664","Type":"ContainerStarted","Data":"6486e949f1c9dfeec7abf5ee9dcb9cf296dea757b0a5f33feb706cb83cb82726"} Nov 25 08:51:38 crc kubenswrapper[4760]: I1125 08:51:38.938971 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:51:38 crc kubenswrapper[4760]: E1125 08:51:38.939748 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:51:51 crc kubenswrapper[4760]: I1125 08:51:51.938419 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:51:51 crc kubenswrapper[4760]: E1125 08:51:51.939416 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:52:05 crc kubenswrapper[4760]: I1125 08:52:05.938085 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:52:05 crc kubenswrapper[4760]: E1125 08:52:05.938850 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:52:16 crc kubenswrapper[4760]: I1125 08:52:16.943947 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:52:16 crc kubenswrapper[4760]: E1125 08:52:16.944692 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:52:23 crc kubenswrapper[4760]: I1125 08:52:23.678372 4760 generic.go:334] "Generic (PLEG): container finished" podID="01b4af7c-f553-48d7-9166-856497bbe664" containerID="6486e949f1c9dfeec7abf5ee9dcb9cf296dea757b0a5f33feb706cb83cb82726" exitCode=0 Nov 25 08:52:23 crc kubenswrapper[4760]: I1125 08:52:23.678489 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" event={"ID":"01b4af7c-f553-48d7-9166-856497bbe664","Type":"ContainerDied","Data":"6486e949f1c9dfeec7abf5ee9dcb9cf296dea757b0a5f33feb706cb83cb82726"} Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.136652 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.195182 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ssh-key\") pod \"01b4af7c-f553-48d7-9166-856497bbe664\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.195259 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-ovn-metadata-agent-neutron-config-0\") pod \"01b4af7c-f553-48d7-9166-856497bbe664\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.195308 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxqwq\" (UniqueName: \"kubernetes.io/projected/01b4af7c-f553-48d7-9166-856497bbe664-kube-api-access-fxqwq\") pod \"01b4af7c-f553-48d7-9166-856497bbe664\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.195339 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-nova-metadata-neutron-config-0\") pod \"01b4af7c-f553-48d7-9166-856497bbe664\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.195385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-metadata-combined-ca-bundle\") pod \"01b4af7c-f553-48d7-9166-856497bbe664\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " Nov 25 
08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.202506 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "01b4af7c-f553-48d7-9166-856497bbe664" (UID: "01b4af7c-f553-48d7-9166-856497bbe664"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.202585 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b4af7c-f553-48d7-9166-856497bbe664-kube-api-access-fxqwq" (OuterVolumeSpecName: "kube-api-access-fxqwq") pod "01b4af7c-f553-48d7-9166-856497bbe664" (UID: "01b4af7c-f553-48d7-9166-856497bbe664"). InnerVolumeSpecName "kube-api-access-fxqwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.223050 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "01b4af7c-f553-48d7-9166-856497bbe664" (UID: "01b4af7c-f553-48d7-9166-856497bbe664"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.230033 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "01b4af7c-f553-48d7-9166-856497bbe664" (UID: "01b4af7c-f553-48d7-9166-856497bbe664"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.231714 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "01b4af7c-f553-48d7-9166-856497bbe664" (UID: "01b4af7c-f553-48d7-9166-856497bbe664"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.297241 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-inventory\") pod \"01b4af7c-f553-48d7-9166-856497bbe664\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.297464 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ceph\") pod \"01b4af7c-f553-48d7-9166-856497bbe664\" (UID: \"01b4af7c-f553-48d7-9166-856497bbe664\") " Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.298192 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.298219 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.298235 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 
25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.298264 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.298279 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxqwq\" (UniqueName: \"kubernetes.io/projected/01b4af7c-f553-48d7-9166-856497bbe664-kube-api-access-fxqwq\") on node \"crc\" DevicePath \"\"" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.302816 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ceph" (OuterVolumeSpecName: "ceph") pod "01b4af7c-f553-48d7-9166-856497bbe664" (UID: "01b4af7c-f553-48d7-9166-856497bbe664"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.325854 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-inventory" (OuterVolumeSpecName: "inventory") pod "01b4af7c-f553-48d7-9166-856497bbe664" (UID: "01b4af7c-f553-48d7-9166-856497bbe664"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.399923 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.400001 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01b4af7c-f553-48d7-9166-856497bbe664-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.699873 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" event={"ID":"01b4af7c-f553-48d7-9166-856497bbe664","Type":"ContainerDied","Data":"309e7db8aa9dadce0edd4533946436a88a5ec489c1a76650381b49b1ee04741b"} Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.699912 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="309e7db8aa9dadce0edd4533946436a88a5ec489c1a76650381b49b1ee04741b" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.699961 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.806209 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs"] Nov 25 08:52:25 crc kubenswrapper[4760]: E1125 08:52:25.806788 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b4af7c-f553-48d7-9166-856497bbe664" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.806831 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b4af7c-f553-48d7-9166-856497bbe664" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.807060 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b4af7c-f553-48d7-9166-856497bbe664" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.807841 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.810060 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.810365 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.810382 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.810956 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.811673 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.813066 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.853566 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs"] Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.908360 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.908447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.908536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2j8\" (UniqueName: \"kubernetes.io/projected/2d913348-cf44-4539-b090-181ea0720a33-kube-api-access-rt2j8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.908606 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.908784 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:25 crc kubenswrapper[4760]: I1125 08:52:25.908820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.010215 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt2j8\" (UniqueName: \"kubernetes.io/projected/2d913348-cf44-4539-b090-181ea0720a33-kube-api-access-rt2j8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.010300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.010430 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.011035 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.011378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.011405 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.014541 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.014650 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.015718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.015737 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.016091 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.039161 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt2j8\" (UniqueName: \"kubernetes.io/projected/2d913348-cf44-4539-b090-181ea0720a33-kube-api-access-rt2j8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.166736 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.673438 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs"] Nov 25 08:52:26 crc kubenswrapper[4760]: I1125 08:52:26.708543 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" event={"ID":"2d913348-cf44-4539-b090-181ea0720a33","Type":"ContainerStarted","Data":"dce2718deae2e28fb0e835a43912366be40f29b67b93c4b0d0d2bee847a97562"} Nov 25 08:52:28 crc kubenswrapper[4760]: I1125 08:52:28.731534 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" event={"ID":"2d913348-cf44-4539-b090-181ea0720a33","Type":"ContainerStarted","Data":"46ca914b45881b4f3e9c847b2ad929e7c85b8081e6f30abfdbda0d459a523a3c"} Nov 25 08:52:28 crc kubenswrapper[4760]: I1125 08:52:28.751207 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" podStartSLOduration=2.928149917 podStartE2EDuration="3.751187291s" podCreationTimestamp="2025-11-25 08:52:25 +0000 UTC" firstStartedPulling="2025-11-25 08:52:26.682410483 +0000 UTC m=+2480.391441268" lastFinishedPulling="2025-11-25 08:52:27.505447827 +0000 UTC m=+2481.214478642" observedRunningTime="2025-11-25 08:52:28.748080285 +0000 UTC m=+2482.457111080" watchObservedRunningTime="2025-11-25 08:52:28.751187291 +0000 UTC m=+2482.460218086" Nov 25 08:52:30 crc kubenswrapper[4760]: I1125 08:52:30.939767 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:52:30 crc kubenswrapper[4760]: E1125 08:52:30.940326 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:52:41 crc kubenswrapper[4760]: I1125 08:52:41.938838 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:52:41 crc kubenswrapper[4760]: E1125 08:52:41.939593 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:52:56 crc kubenswrapper[4760]: I1125 08:52:56.944023 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:52:56 crc kubenswrapper[4760]: E1125 08:52:56.944666 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:53:08 crc kubenswrapper[4760]: I1125 08:53:08.939544 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:53:08 crc kubenswrapper[4760]: E1125 08:53:08.940802 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:53:20 crc kubenswrapper[4760]: I1125 08:53:20.938789 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:53:20 crc kubenswrapper[4760]: E1125 08:53:20.939667 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:53:34 crc kubenswrapper[4760]: I1125 08:53:34.938296 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:53:34 crc kubenswrapper[4760]: E1125 08:53:34.939297 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:53:48 crc kubenswrapper[4760]: I1125 08:53:48.938423 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:53:48 crc kubenswrapper[4760]: E1125 08:53:48.939154 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:53:59 crc kubenswrapper[4760]: I1125 08:53:59.938499 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:53:59 crc kubenswrapper[4760]: E1125 08:53:59.939391 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:54:13 crc kubenswrapper[4760]: I1125 08:54:13.938957 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:54:13 crc kubenswrapper[4760]: E1125 08:54:13.939787 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:54:24 crc kubenswrapper[4760]: I1125 08:54:24.938713 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:54:24 crc kubenswrapper[4760]: E1125 08:54:24.940699 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:54:37 crc kubenswrapper[4760]: I1125 08:54:37.938500 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:54:37 crc kubenswrapper[4760]: E1125 08:54:37.939453 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:54:51 crc kubenswrapper[4760]: I1125 08:54:51.938892 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:54:51 crc kubenswrapper[4760]: E1125 08:54:51.940721 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:55:06 crc kubenswrapper[4760]: I1125 08:55:06.943529 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:55:06 crc kubenswrapper[4760]: E1125 08:55:06.944238 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:55:18 crc kubenswrapper[4760]: I1125 08:55:18.938775 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:55:18 crc kubenswrapper[4760]: E1125 08:55:18.939628 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.379573 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6l68r"] Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.382063 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.396442 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6l68r"] Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.577263 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-utilities\") pod \"community-operators-6l68r\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.577323 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-catalog-content\") pod \"community-operators-6l68r\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.577520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqkq7\" (UniqueName: \"kubernetes.io/projected/565abc60-cbc5-4c8f-828b-418e55415e72-kube-api-access-bqkq7\") pod \"community-operators-6l68r\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.679453 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-utilities\") pod \"community-operators-6l68r\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.679530 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-catalog-content\") pod \"community-operators-6l68r\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.679604 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqkq7\" (UniqueName: \"kubernetes.io/projected/565abc60-cbc5-4c8f-828b-418e55415e72-kube-api-access-bqkq7\") pod \"community-operators-6l68r\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.680079 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-catalog-content\") pod \"community-operators-6l68r\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.680368 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-utilities\") pod \"community-operators-6l68r\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.705088 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqkq7\" (UniqueName: \"kubernetes.io/projected/565abc60-cbc5-4c8f-828b-418e55415e72-kube-api-access-bqkq7\") pod \"community-operators-6l68r\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:32 crc kubenswrapper[4760]: I1125 08:55:32.707301 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:33 crc kubenswrapper[4760]: I1125 08:55:33.205304 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6l68r"] Nov 25 08:55:33 crc kubenswrapper[4760]: I1125 08:55:33.282434 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l68r" event={"ID":"565abc60-cbc5-4c8f-828b-418e55415e72","Type":"ContainerStarted","Data":"ea01889f5a32c7749755241747a88cf30bb3544662d913c2cfc8d7cbb9102f9c"} Nov 25 08:55:33 crc kubenswrapper[4760]: I1125 08:55:33.939056 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:55:33 crc kubenswrapper[4760]: E1125 08:55:33.939756 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:55:34 crc kubenswrapper[4760]: I1125 08:55:34.291543 4760 generic.go:334] "Generic (PLEG): container finished" podID="565abc60-cbc5-4c8f-828b-418e55415e72" containerID="40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a" exitCode=0 Nov 25 08:55:34 crc kubenswrapper[4760]: I1125 08:55:34.291587 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l68r" event={"ID":"565abc60-cbc5-4c8f-828b-418e55415e72","Type":"ContainerDied","Data":"40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a"} Nov 25 08:55:34 crc kubenswrapper[4760]: I1125 08:55:34.294613 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 
08:55:36 crc kubenswrapper[4760]: I1125 08:55:36.309202 4760 generic.go:334] "Generic (PLEG): container finished" podID="565abc60-cbc5-4c8f-828b-418e55415e72" containerID="09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6" exitCode=0 Nov 25 08:55:36 crc kubenswrapper[4760]: I1125 08:55:36.309373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l68r" event={"ID":"565abc60-cbc5-4c8f-828b-418e55415e72","Type":"ContainerDied","Data":"09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6"} Nov 25 08:55:37 crc kubenswrapper[4760]: I1125 08:55:37.320952 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l68r" event={"ID":"565abc60-cbc5-4c8f-828b-418e55415e72","Type":"ContainerStarted","Data":"d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e"} Nov 25 08:55:42 crc kubenswrapper[4760]: I1125 08:55:42.707622 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:42 crc kubenswrapper[4760]: I1125 08:55:42.708198 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:42 crc kubenswrapper[4760]: I1125 08:55:42.756261 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:42 crc kubenswrapper[4760]: I1125 08:55:42.781449 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6l68r" podStartSLOduration=8.317203004 podStartE2EDuration="10.781422908s" podCreationTimestamp="2025-11-25 08:55:32 +0000 UTC" firstStartedPulling="2025-11-25 08:55:34.294345682 +0000 UTC m=+2668.003376477" lastFinishedPulling="2025-11-25 08:55:36.758565586 +0000 UTC m=+2670.467596381" observedRunningTime="2025-11-25 08:55:37.341378953 +0000 UTC 
m=+2671.050409758" watchObservedRunningTime="2025-11-25 08:55:42.781422908 +0000 UTC m=+2676.490453723" Nov 25 08:55:43 crc kubenswrapper[4760]: I1125 08:55:43.424073 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:43 crc kubenswrapper[4760]: I1125 08:55:43.493283 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6l68r"] Nov 25 08:55:45 crc kubenswrapper[4760]: I1125 08:55:45.389996 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6l68r" podUID="565abc60-cbc5-4c8f-828b-418e55415e72" containerName="registry-server" containerID="cri-o://d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e" gracePeriod=2 Nov 25 08:55:45 crc kubenswrapper[4760]: I1125 08:55:45.850723 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.011544 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqkq7\" (UniqueName: \"kubernetes.io/projected/565abc60-cbc5-4c8f-828b-418e55415e72-kube-api-access-bqkq7\") pod \"565abc60-cbc5-4c8f-828b-418e55415e72\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.011617 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-utilities\") pod \"565abc60-cbc5-4c8f-828b-418e55415e72\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.011866 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-catalog-content\") pod \"565abc60-cbc5-4c8f-828b-418e55415e72\" (UID: \"565abc60-cbc5-4c8f-828b-418e55415e72\") " Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.013227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-utilities" (OuterVolumeSpecName: "utilities") pod "565abc60-cbc5-4c8f-828b-418e55415e72" (UID: "565abc60-cbc5-4c8f-828b-418e55415e72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.023553 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565abc60-cbc5-4c8f-828b-418e55415e72-kube-api-access-bqkq7" (OuterVolumeSpecName: "kube-api-access-bqkq7") pod "565abc60-cbc5-4c8f-828b-418e55415e72" (UID: "565abc60-cbc5-4c8f-828b-418e55415e72"). InnerVolumeSpecName "kube-api-access-bqkq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.090318 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "565abc60-cbc5-4c8f-828b-418e55415e72" (UID: "565abc60-cbc5-4c8f-828b-418e55415e72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.114505 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.114546 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565abc60-cbc5-4c8f-828b-418e55415e72-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.114560 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqkq7\" (UniqueName: \"kubernetes.io/projected/565abc60-cbc5-4c8f-828b-418e55415e72-kube-api-access-bqkq7\") on node \"crc\" DevicePath \"\"" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.401181 4760 generic.go:334] "Generic (PLEG): container finished" podID="565abc60-cbc5-4c8f-828b-418e55415e72" containerID="d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e" exitCode=0 Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.401224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l68r" event={"ID":"565abc60-cbc5-4c8f-828b-418e55415e72","Type":"ContainerDied","Data":"d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e"} Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.401238 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6l68r" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.401309 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6l68r" event={"ID":"565abc60-cbc5-4c8f-828b-418e55415e72","Type":"ContainerDied","Data":"ea01889f5a32c7749755241747a88cf30bb3544662d913c2cfc8d7cbb9102f9c"} Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.401336 4760 scope.go:117] "RemoveContainer" containerID="d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.424261 4760 scope.go:117] "RemoveContainer" containerID="09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.436899 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6l68r"] Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.444656 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6l68r"] Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.453933 4760 scope.go:117] "RemoveContainer" containerID="40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.495958 4760 scope.go:117] "RemoveContainer" containerID="d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e" Nov 25 08:55:46 crc kubenswrapper[4760]: E1125 08:55:46.496515 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e\": container with ID starting with d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e not found: ID does not exist" containerID="d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.496555 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e"} err="failed to get container status \"d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e\": rpc error: code = NotFound desc = could not find container \"d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e\": container with ID starting with d5c8124290bb0c2a28c2f30eed613fa8147b22b2a3aa2ea0308be4ff4c3d6b3e not found: ID does not exist" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.496587 4760 scope.go:117] "RemoveContainer" containerID="09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6" Nov 25 08:55:46 crc kubenswrapper[4760]: E1125 08:55:46.497157 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6\": container with ID starting with 09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6 not found: ID does not exist" containerID="09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.497197 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6"} err="failed to get container status \"09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6\": rpc error: code = NotFound desc = could not find container \"09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6\": container with ID starting with 09958d40b91237d3f30f55564c4d99a2a588869212a618c097924ad8a844ece6 not found: ID does not exist" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.497216 4760 scope.go:117] "RemoveContainer" containerID="40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a" Nov 25 08:55:46 crc kubenswrapper[4760]: E1125 
08:55:46.497524 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a\": container with ID starting with 40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a not found: ID does not exist" containerID="40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.497557 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a"} err="failed to get container status \"40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a\": rpc error: code = NotFound desc = could not find container \"40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a\": container with ID starting with 40261ea96c0fb53a7ea6dcd03ef7a9dd7c08ea3e5558f466eeafcebd37c86c1a not found: ID does not exist" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.939106 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:55:46 crc kubenswrapper[4760]: E1125 08:55:46.939748 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:55:46 crc kubenswrapper[4760]: I1125 08:55:46.949801 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565abc60-cbc5-4c8f-828b-418e55415e72" path="/var/lib/kubelet/pods/565abc60-cbc5-4c8f-828b-418e55415e72/volumes" Nov 25 08:55:57 crc kubenswrapper[4760]: I1125 08:55:57.939897 
4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:55:57 crc kubenswrapper[4760]: E1125 08:55:57.941238 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 08:56:10 crc kubenswrapper[4760]: I1125 08:56:10.939562 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:56:11 crc kubenswrapper[4760]: I1125 08:56:11.602231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"13e9ce4d6ea90c9d403df75bea2e9a8044a9729da91e45cf4a3c2a094df970e2"} Nov 25 08:56:36 crc kubenswrapper[4760]: I1125 08:56:36.840697 4760 generic.go:334] "Generic (PLEG): container finished" podID="2d913348-cf44-4539-b090-181ea0720a33" containerID="46ca914b45881b4f3e9c847b2ad929e7c85b8081e6f30abfdbda0d459a523a3c" exitCode=0 Nov 25 08:56:36 crc kubenswrapper[4760]: I1125 08:56:36.841319 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" event={"ID":"2d913348-cf44-4539-b090-181ea0720a33","Type":"ContainerDied","Data":"46ca914b45881b4f3e9c847b2ad929e7c85b8081e6f30abfdbda0d459a523a3c"} Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.263468 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.368083 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-inventory\") pod \"2d913348-cf44-4539-b090-181ea0720a33\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.368147 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt2j8\" (UniqueName: \"kubernetes.io/projected/2d913348-cf44-4539-b090-181ea0720a33-kube-api-access-rt2j8\") pod \"2d913348-cf44-4539-b090-181ea0720a33\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.368225 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-secret-0\") pod \"2d913348-cf44-4539-b090-181ea0720a33\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.368312 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ssh-key\") pod \"2d913348-cf44-4539-b090-181ea0720a33\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.368390 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-combined-ca-bundle\") pod \"2d913348-cf44-4539-b090-181ea0720a33\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.368441 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ceph\") pod \"2d913348-cf44-4539-b090-181ea0720a33\" (UID: \"2d913348-cf44-4539-b090-181ea0720a33\") " Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.374712 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ceph" (OuterVolumeSpecName: "ceph") pod "2d913348-cf44-4539-b090-181ea0720a33" (UID: "2d913348-cf44-4539-b090-181ea0720a33"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.374736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2d913348-cf44-4539-b090-181ea0720a33" (UID: "2d913348-cf44-4539-b090-181ea0720a33"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.381466 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d913348-cf44-4539-b090-181ea0720a33-kube-api-access-rt2j8" (OuterVolumeSpecName: "kube-api-access-rt2j8") pod "2d913348-cf44-4539-b090-181ea0720a33" (UID: "2d913348-cf44-4539-b090-181ea0720a33"). InnerVolumeSpecName "kube-api-access-rt2j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.394475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2d913348-cf44-4539-b090-181ea0720a33" (UID: "2d913348-cf44-4539-b090-181ea0720a33"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.397625 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2d913348-cf44-4539-b090-181ea0720a33" (UID: "2d913348-cf44-4539-b090-181ea0720a33"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.401910 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-inventory" (OuterVolumeSpecName: "inventory") pod "2d913348-cf44-4539-b090-181ea0720a33" (UID: "2d913348-cf44-4539-b090-181ea0720a33"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.470644 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.470701 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt2j8\" (UniqueName: \"kubernetes.io/projected/2d913348-cf44-4539-b090-181ea0720a33-kube-api-access-rt2j8\") on node \"crc\" DevicePath \"\"" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.470716 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.470726 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 
08:56:38.470736 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.470747 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2d913348-cf44-4539-b090-181ea0720a33-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.862678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" event={"ID":"2d913348-cf44-4539-b090-181ea0720a33","Type":"ContainerDied","Data":"dce2718deae2e28fb0e835a43912366be40f29b67b93c4b0d0d2bee847a97562"} Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.862729 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.862752 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dce2718deae2e28fb0e835a43912366be40f29b67b93c4b0d0d2bee847a97562" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.976060 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp"] Nov 25 08:56:38 crc kubenswrapper[4760]: E1125 08:56:38.977082 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565abc60-cbc5-4c8f-828b-418e55415e72" containerName="extract-utilities" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.977181 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="565abc60-cbc5-4c8f-828b-418e55415e72" containerName="extract-utilities" Nov 25 08:56:38 crc kubenswrapper[4760]: E1125 08:56:38.977371 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="565abc60-cbc5-4c8f-828b-418e55415e72" containerName="extract-content" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.977465 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="565abc60-cbc5-4c8f-828b-418e55415e72" containerName="extract-content" Nov 25 08:56:38 crc kubenswrapper[4760]: E1125 08:56:38.977548 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565abc60-cbc5-4c8f-828b-418e55415e72" containerName="registry-server" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.977613 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="565abc60-cbc5-4c8f-828b-418e55415e72" containerName="registry-server" Nov 25 08:56:38 crc kubenswrapper[4760]: E1125 08:56:38.977693 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d913348-cf44-4539-b090-181ea0720a33" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.977773 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d913348-cf44-4539-b090-181ea0720a33" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.978058 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d913348-cf44-4539-b090-181ea0720a33" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.978163 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="565abc60-cbc5-4c8f-828b-418e55415e72" containerName="registry-server" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.979112 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.984756 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.984818 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.984845 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.984934 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w2r28" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.985000 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.985115 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.985185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.985209 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.986345 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp"] Nov 25 08:56:38 crc kubenswrapper[4760]: I1125 08:56:38.987088 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095170 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095268 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095726 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095754 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095830 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095890 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp7wj\" (UniqueName: 
\"kubernetes.io/projected/515be97b-ca6d-43a0-b8a1-471a782240bc-kube-api-access-vp7wj\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.095928 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.197125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.197178 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: 
\"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.197211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.197270 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.197308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.197336 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp7wj\" (UniqueName: \"kubernetes.io/projected/515be97b-ca6d-43a0-b8a1-471a782240bc-kube-api-access-vp7wj\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc 
kubenswrapper[4760]: I1125 08:56:39.197385 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.197414 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.198008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.198068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.198127 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.198489 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.198513 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.201538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.201898 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.202117 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.202318 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.202936 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.204218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.208059 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.212904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.216062 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp7wj\" (UniqueName: \"kubernetes.io/projected/515be97b-ca6d-43a0-b8a1-471a782240bc-kube-api-access-vp7wj\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.309601 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.802328 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp"] Nov 25 08:56:39 crc kubenswrapper[4760]: I1125 08:56:39.872120 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" event={"ID":"515be97b-ca6d-43a0-b8a1-471a782240bc","Type":"ContainerStarted","Data":"0fe47840112cf413f620cdb60258a588dce93abb990efe5f6739e05905d8c3b1"} Nov 25 08:56:40 crc kubenswrapper[4760]: I1125 08:56:40.884308 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" event={"ID":"515be97b-ca6d-43a0-b8a1-471a782240bc","Type":"ContainerStarted","Data":"bf80431685de0baebd39d971f3b5309909ada8f5af3bdc9ff7de02748700d9fb"} Nov 25 08:56:40 crc kubenswrapper[4760]: I1125 08:56:40.906959 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" podStartSLOduration=2.420040119 podStartE2EDuration="2.906938801s" podCreationTimestamp="2025-11-25 08:56:38 +0000 UTC" firstStartedPulling="2025-11-25 08:56:39.809061378 +0000 UTC m=+2733.518092173" lastFinishedPulling="2025-11-25 08:56:40.29596007 +0000 UTC m=+2734.004990855" observedRunningTime="2025-11-25 08:56:40.899574911 +0000 UTC m=+2734.608605726" watchObservedRunningTime="2025-11-25 08:56:40.906938801 +0000 UTC m=+2734.615969596" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.275327 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qqr25"] Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.279695 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.291559 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqr25"] Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.405729 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-utilities\") pod \"redhat-marketplace-qqr25\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.405809 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ngf\" (UniqueName: \"kubernetes.io/projected/d0263e68-c70d-422b-a0d2-314217257caf-kube-api-access-z8ngf\") pod \"redhat-marketplace-qqr25\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.406441 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-catalog-content\") pod \"redhat-marketplace-qqr25\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.508053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-utilities\") pod \"redhat-marketplace-qqr25\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.508108 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-z8ngf\" (UniqueName: \"kubernetes.io/projected/d0263e68-c70d-422b-a0d2-314217257caf-kube-api-access-z8ngf\") pod \"redhat-marketplace-qqr25\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.508179 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-catalog-content\") pod \"redhat-marketplace-qqr25\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.508650 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-utilities\") pod \"redhat-marketplace-qqr25\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.508721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-catalog-content\") pod \"redhat-marketplace-qqr25\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.534299 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ngf\" (UniqueName: \"kubernetes.io/projected/d0263e68-c70d-422b-a0d2-314217257caf-kube-api-access-z8ngf\") pod \"redhat-marketplace-qqr25\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:36 crc kubenswrapper[4760]: I1125 08:57:36.609772 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:37 crc kubenswrapper[4760]: I1125 08:57:37.052766 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqr25"] Nov 25 08:57:37 crc kubenswrapper[4760]: I1125 08:57:37.376951 4760 generic.go:334] "Generic (PLEG): container finished" podID="d0263e68-c70d-422b-a0d2-314217257caf" containerID="084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab" exitCode=0 Nov 25 08:57:37 crc kubenswrapper[4760]: I1125 08:57:37.377033 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqr25" event={"ID":"d0263e68-c70d-422b-a0d2-314217257caf","Type":"ContainerDied","Data":"084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab"} Nov 25 08:57:37 crc kubenswrapper[4760]: I1125 08:57:37.377271 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqr25" event={"ID":"d0263e68-c70d-422b-a0d2-314217257caf","Type":"ContainerStarted","Data":"645f998161aff6e3803670e72bc1e17356c33d5017dc51b16e78573dbd2b6bee"} Nov 25 08:57:38 crc kubenswrapper[4760]: I1125 08:57:38.387241 4760 generic.go:334] "Generic (PLEG): container finished" podID="d0263e68-c70d-422b-a0d2-314217257caf" containerID="7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba" exitCode=0 Nov 25 08:57:38 crc kubenswrapper[4760]: I1125 08:57:38.387293 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqr25" event={"ID":"d0263e68-c70d-422b-a0d2-314217257caf","Type":"ContainerDied","Data":"7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba"} Nov 25 08:57:39 crc kubenswrapper[4760]: I1125 08:57:39.399844 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqr25" 
event={"ID":"d0263e68-c70d-422b-a0d2-314217257caf","Type":"ContainerStarted","Data":"8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1"} Nov 25 08:57:39 crc kubenswrapper[4760]: I1125 08:57:39.430992 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qqr25" podStartSLOduration=1.690828139 podStartE2EDuration="3.430964185s" podCreationTimestamp="2025-11-25 08:57:36 +0000 UTC" firstStartedPulling="2025-11-25 08:57:37.37926785 +0000 UTC m=+2791.088298645" lastFinishedPulling="2025-11-25 08:57:39.119403886 +0000 UTC m=+2792.828434691" observedRunningTime="2025-11-25 08:57:39.417800559 +0000 UTC m=+2793.126831394" watchObservedRunningTime="2025-11-25 08:57:39.430964185 +0000 UTC m=+2793.139995020" Nov 25 08:57:46 crc kubenswrapper[4760]: I1125 08:57:46.610780 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:46 crc kubenswrapper[4760]: I1125 08:57:46.612001 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:46 crc kubenswrapper[4760]: I1125 08:57:46.653996 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:47 crc kubenswrapper[4760]: I1125 08:57:47.522172 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:47 crc kubenswrapper[4760]: I1125 08:57:47.568090 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqr25"] Nov 25 08:57:49 crc kubenswrapper[4760]: I1125 08:57:49.484766 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qqr25" podUID="d0263e68-c70d-422b-a0d2-314217257caf" containerName="registry-server" 
containerID="cri-o://8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1" gracePeriod=2 Nov 25 08:57:49 crc kubenswrapper[4760]: I1125 08:57:49.934338 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.064872 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-utilities\") pod \"d0263e68-c70d-422b-a0d2-314217257caf\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.064961 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ngf\" (UniqueName: \"kubernetes.io/projected/d0263e68-c70d-422b-a0d2-314217257caf-kube-api-access-z8ngf\") pod \"d0263e68-c70d-422b-a0d2-314217257caf\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.065040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-catalog-content\") pod \"d0263e68-c70d-422b-a0d2-314217257caf\" (UID: \"d0263e68-c70d-422b-a0d2-314217257caf\") " Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.065895 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-utilities" (OuterVolumeSpecName: "utilities") pod "d0263e68-c70d-422b-a0d2-314217257caf" (UID: "d0263e68-c70d-422b-a0d2-314217257caf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.071590 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0263e68-c70d-422b-a0d2-314217257caf-kube-api-access-z8ngf" (OuterVolumeSpecName: "kube-api-access-z8ngf") pod "d0263e68-c70d-422b-a0d2-314217257caf" (UID: "d0263e68-c70d-422b-a0d2-314217257caf"). InnerVolumeSpecName "kube-api-access-z8ngf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.090886 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0263e68-c70d-422b-a0d2-314217257caf" (UID: "d0263e68-c70d-422b-a0d2-314217257caf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.168556 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.168606 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0263e68-c70d-422b-a0d2-314217257caf-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.168619 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ngf\" (UniqueName: \"kubernetes.io/projected/d0263e68-c70d-422b-a0d2-314217257caf-kube-api-access-z8ngf\") on node \"crc\" DevicePath \"\"" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.498650 4760 generic.go:334] "Generic (PLEG): container finished" podID="d0263e68-c70d-422b-a0d2-314217257caf" 
containerID="8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1" exitCode=0 Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.498697 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqr25" event={"ID":"d0263e68-c70d-422b-a0d2-314217257caf","Type":"ContainerDied","Data":"8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1"} Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.498734 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqr25" event={"ID":"d0263e68-c70d-422b-a0d2-314217257caf","Type":"ContainerDied","Data":"645f998161aff6e3803670e72bc1e17356c33d5017dc51b16e78573dbd2b6bee"} Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.498736 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqr25" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.498755 4760 scope.go:117] "RemoveContainer" containerID="8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.532602 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqr25"] Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.538048 4760 scope.go:117] "RemoveContainer" containerID="7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.541868 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqr25"] Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.561219 4760 scope.go:117] "RemoveContainer" containerID="084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.603097 4760 scope.go:117] "RemoveContainer" containerID="8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1" Nov 25 
08:57:50 crc kubenswrapper[4760]: E1125 08:57:50.603903 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1\": container with ID starting with 8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1 not found: ID does not exist" containerID="8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.603938 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1"} err="failed to get container status \"8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1\": rpc error: code = NotFound desc = could not find container \"8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1\": container with ID starting with 8c54ee27ea0c8f4100ae43a93177fee256c32ef667cb6cd758a064c319d525a1 not found: ID does not exist" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.603963 4760 scope.go:117] "RemoveContainer" containerID="7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba" Nov 25 08:57:50 crc kubenswrapper[4760]: E1125 08:57:50.604176 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba\": container with ID starting with 7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba not found: ID does not exist" containerID="7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.604193 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba"} err="failed to get container status 
\"7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba\": rpc error: code = NotFound desc = could not find container \"7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba\": container with ID starting with 7392a3e2aefe3a28b7fffa1b6714cc4160f43e37ab32bd45eb748456744918ba not found: ID does not exist" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.604206 4760 scope.go:117] "RemoveContainer" containerID="084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab" Nov 25 08:57:50 crc kubenswrapper[4760]: E1125 08:57:50.604400 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab\": container with ID starting with 084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab not found: ID does not exist" containerID="084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.604417 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab"} err="failed to get container status \"084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab\": rpc error: code = NotFound desc = could not find container \"084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab\": container with ID starting with 084a7f6b5c87a3a2bf7c62eb7b0fba3815d1147877686cddab2c72c91a1e2cab not found: ID does not exist" Nov 25 08:57:50 crc kubenswrapper[4760]: I1125 08:57:50.949381 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0263e68-c70d-422b-a0d2-314217257caf" path="/var/lib/kubelet/pods/d0263e68-c70d-422b-a0d2-314217257caf/volumes" Nov 25 08:58:31 crc kubenswrapper[4760]: I1125 08:58:31.746608 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:58:31 crc kubenswrapper[4760]: I1125 08:58:31.747122 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:59:01 crc kubenswrapper[4760]: I1125 08:59:01.746598 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:59:01 crc kubenswrapper[4760]: I1125 08:59:01.747228 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:59:27 crc kubenswrapper[4760]: I1125 08:59:27.299951 4760 generic.go:334] "Generic (PLEG): container finished" podID="515be97b-ca6d-43a0-b8a1-471a782240bc" containerID="bf80431685de0baebd39d971f3b5309909ada8f5af3bdc9ff7de02748700d9fb" exitCode=0 Nov 25 08:59:27 crc kubenswrapper[4760]: I1125 08:59:27.300046 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" event={"ID":"515be97b-ca6d-43a0-b8a1-471a782240bc","Type":"ContainerDied","Data":"bf80431685de0baebd39d971f3b5309909ada8f5af3bdc9ff7de02748700d9fb"} Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.690815 4760 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.735996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp7wj\" (UniqueName: \"kubernetes.io/projected/515be97b-ca6d-43a0-b8a1-471a782240bc-kube-api-access-vp7wj\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.736052 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-inventory\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.736125 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ssh-key\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.736165 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-extra-config-0\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.736191 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-0\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 
08:59:28.736236 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-1\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.736278 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph-nova-0\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.736299 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.736360 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-1\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.736383 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-custom-ceph-combined-ca-bundle\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.736505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-0\") pod \"515be97b-ca6d-43a0-b8a1-471a782240bc\" (UID: \"515be97b-ca6d-43a0-b8a1-471a782240bc\") " Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.741742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515be97b-ca6d-43a0-b8a1-471a782240bc-kube-api-access-vp7wj" (OuterVolumeSpecName: "kube-api-access-vp7wj") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "kube-api-access-vp7wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.745927 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph" (OuterVolumeSpecName: "ceph") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.760615 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.775949 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "ceph-nova-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.775970 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.778286 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.778718 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.779873 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.780605 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-inventory" (OuterVolumeSpecName: "inventory") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.784908 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.790651 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "515be97b-ca6d-43a0-b8a1-471a782240bc" (UID: "515be97b-ca6d-43a0-b8a1-471a782240bc"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838788 4760 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838826 4760 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838840 4760 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838852 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp7wj\" (UniqueName: \"kubernetes.io/projected/515be97b-ca6d-43a0-b8a1-471a782240bc-kube-api-access-vp7wj\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838864 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838874 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838883 4760 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 
25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838894 4760 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838903 4760 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838914 4760 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:28 crc kubenswrapper[4760]: I1125 08:59:28.838924 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/515be97b-ca6d-43a0-b8a1-471a782240bc-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:29 crc kubenswrapper[4760]: I1125 08:59:29.317724 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" event={"ID":"515be97b-ca6d-43a0-b8a1-471a782240bc","Type":"ContainerDied","Data":"0fe47840112cf413f620cdb60258a588dce93abb990efe5f6739e05905d8c3b1"} Nov 25 08:59:29 crc kubenswrapper[4760]: I1125 08:59:29.317761 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe47840112cf413f620cdb60258a588dce93abb990efe5f6739e05905d8c3b1" Nov 25 08:59:29 crc kubenswrapper[4760]: I1125 08:59:29.318149 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp" Nov 25 08:59:31 crc kubenswrapper[4760]: I1125 08:59:31.746325 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:59:31 crc kubenswrapper[4760]: I1125 08:59:31.746615 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:59:31 crc kubenswrapper[4760]: I1125 08:59:31.746697 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 08:59:31 crc kubenswrapper[4760]: I1125 08:59:31.747460 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13e9ce4d6ea90c9d403df75bea2e9a8044a9729da91e45cf4a3c2a094df970e2"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:59:31 crc kubenswrapper[4760]: I1125 08:59:31.747528 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://13e9ce4d6ea90c9d403df75bea2e9a8044a9729da91e45cf4a3c2a094df970e2" gracePeriod=600 Nov 25 08:59:32 crc kubenswrapper[4760]: I1125 08:59:32.351275 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"13e9ce4d6ea90c9d403df75bea2e9a8044a9729da91e45cf4a3c2a094df970e2"} Nov 25 08:59:32 crc kubenswrapper[4760]: I1125 08:59:32.351643 4760 scope.go:117] "RemoveContainer" containerID="4057b5136225f1c50ae348f5e87b8508898bc68053c996f9d01a1b279482ce72" Nov 25 08:59:32 crc kubenswrapper[4760]: I1125 08:59:32.351226 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="13e9ce4d6ea90c9d403df75bea2e9a8044a9729da91e45cf4a3c2a094df970e2" exitCode=0 Nov 25 08:59:32 crc kubenswrapper[4760]: I1125 08:59:32.351713 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0"} Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.428697 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 25 08:59:44 crc kubenswrapper[4760]: E1125 08:59:44.429885 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0263e68-c70d-422b-a0d2-314217257caf" containerName="extract-content" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.429903 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0263e68-c70d-422b-a0d2-314217257caf" containerName="extract-content" Nov 25 08:59:44 crc kubenswrapper[4760]: E1125 08:59:44.429939 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0263e68-c70d-422b-a0d2-314217257caf" containerName="registry-server" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.429947 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0263e68-c70d-422b-a0d2-314217257caf" containerName="registry-server" Nov 25 08:59:44 crc kubenswrapper[4760]: E1125 
08:59:44.429962 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515be97b-ca6d-43a0-b8a1-471a782240bc" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.429972 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="515be97b-ca6d-43a0-b8a1-471a782240bc" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 25 08:59:44 crc kubenswrapper[4760]: E1125 08:59:44.429988 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0263e68-c70d-422b-a0d2-314217257caf" containerName="extract-utilities" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.429996 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0263e68-c70d-422b-a0d2-314217257caf" containerName="extract-utilities" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.430194 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0263e68-c70d-422b-a0d2-314217257caf" containerName="registry-server" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.430220 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="515be97b-ca6d-43a0-b8a1-471a782240bc" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.431460 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.437068 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.437327 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.452142 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.460830 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.466680 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.531821 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.538841 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-config-data\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543314 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-scripts\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543439 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577wr\" (UniqueName: 
\"kubernetes.io/projected/09dd7945-dda4-4682-b55e-44569ec2bc78-kube-api-access-577wr\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543491 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543529 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-etc-nvme\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543591 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-lib-modules\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543714 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543742 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-run\") pod \"cinder-backup-0\" (UID: 
\"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543774 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-config-data-custom\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543845 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/09dd7945-dda4-4682-b55e-44569ec2bc78-ceph\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543952 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-dev\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.543991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 
08:59:44.544025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-sys\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.544053 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.544086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.574101 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.646898 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfzn6\" (UniqueName: \"kubernetes.io/projected/f4f729ff-1806-4032-922b-2a47e4a9d7ff-kube-api-access-jfzn6\") pod \"cinder-volume-volume1-0\" (UID: 
\"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647376 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-dev\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647456 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647495 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-dev\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647578 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-sys\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647735 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647777 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647862 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.647959 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-sys\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648140 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648261 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: 
\"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648351 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-config-data\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648497 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648589 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 
25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648742 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-scripts\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648889 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.648961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649061 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-577wr\" (UniqueName: \"kubernetes.io/projected/09dd7945-dda4-4682-b55e-44569ec2bc78-kube-api-access-577wr\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649157 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649227 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-run\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649328 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-etc-nvme\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649404 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f4f729ff-1806-4032-922b-2a47e4a9d7ff-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649498 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-lib-modules\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649763 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-run\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649838 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-config-data-custom\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.649913 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.650001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 
25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.650082 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/09dd7945-dda4-4682-b55e-44569ec2bc78-ceph\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.650167 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.650512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.650728 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-etc-nvme\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.650831 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.650885 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-lib-modules\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.650921 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-run\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.651521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/09dd7945-dda4-4682-b55e-44569ec2bc78-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.654122 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-scripts\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.655167 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-config-data\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.657152 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/09dd7945-dda4-4682-b55e-44569ec2bc78-ceph\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.661438 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-config-data-custom\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.662086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09dd7945-dda4-4682-b55e-44569ec2bc78-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.672527 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-577wr\" (UniqueName: \"kubernetes.io/projected/09dd7945-dda4-4682-b55e-44569ec2bc78-kube-api-access-577wr\") pod \"cinder-backup-0\" (UID: \"09dd7945-dda4-4682-b55e-44569ec2bc78\") " pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752085 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752175 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752206 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752199 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752279 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-run\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f4f729ff-1806-4032-922b-2a47e4a9d7ff-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752339 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " 
pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752412 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752419 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-run\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752550 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752578 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfzn6\" (UniqueName: \"kubernetes.io/projected/f4f729ff-1806-4032-922b-2a47e4a9d7ff-kube-api-access-jfzn6\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752669 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752691 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752724 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752764 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752795 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752844 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-sys\") pod 
\"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752982 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.752666 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.753187 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.753284 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: 
I1125 08:59:44.753295 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.753315 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4f729ff-1806-4032-922b-2a47e4a9d7ff-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.756439 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f4f729ff-1806-4032-922b-2a47e4a9d7ff-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.756823 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.756839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.756899 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-config-data\") pod 
\"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.759875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f729ff-1806-4032-922b-2a47e4a9d7ff-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.770865 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfzn6\" (UniqueName: \"kubernetes.io/projected/f4f729ff-1806-4032-922b-2a47e4a9d7ff-kube-api-access-jfzn6\") pod \"cinder-volume-volume1-0\" (UID: \"f4f729ff-1806-4032-922b-2a47e4a9d7ff\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.778964 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.835765 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.994551 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-nh6wn"] Nov 25 08:59:44 crc kubenswrapper[4760]: I1125 08:59:44.996158 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.005833 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-nh6wn"] Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.052101 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-f2bf-account-create-gnm6f"] Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.054321 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.056639 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.058772 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f12d4c-8065-4ae2-835e-dd2cd09160a6-operator-scripts\") pod \"manila-db-create-nh6wn\" (UID: \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\") " pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.058920 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzxl\" (UniqueName: \"kubernetes.io/projected/90f12d4c-8065-4ae2-835e-dd2cd09160a6-kube-api-access-lgzxl\") pod \"manila-db-create-nh6wn\" (UID: \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\") " pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.066317 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-f2bf-account-create-gnm6f"] Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.162492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzxl\" (UniqueName: \"kubernetes.io/projected/90f12d4c-8065-4ae2-835e-dd2cd09160a6-kube-api-access-lgzxl\") pod \"manila-db-create-nh6wn\" (UID: \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\") " pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.162557 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-operator-scripts\") pod \"manila-f2bf-account-create-gnm6f\" (UID: \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\") " 
pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.162600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw478\" (UniqueName: \"kubernetes.io/projected/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-kube-api-access-mw478\") pod \"manila-f2bf-account-create-gnm6f\" (UID: \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\") " pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.162684 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f12d4c-8065-4ae2-835e-dd2cd09160a6-operator-scripts\") pod \"manila-db-create-nh6wn\" (UID: \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\") " pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.163582 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f12d4c-8065-4ae2-835e-dd2cd09160a6-operator-scripts\") pod \"manila-db-create-nh6wn\" (UID: \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\") " pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.214564 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzxl\" (UniqueName: \"kubernetes.io/projected/90f12d4c-8065-4ae2-835e-dd2cd09160a6-kube-api-access-lgzxl\") pod \"manila-db-create-nh6wn\" (UID: \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\") " pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.264474 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-operator-scripts\") pod \"manila-f2bf-account-create-gnm6f\" (UID: \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\") " 
pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.264535 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw478\" (UniqueName: \"kubernetes.io/projected/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-kube-api-access-mw478\") pod \"manila-f2bf-account-create-gnm6f\" (UID: \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\") " pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.264805 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.265550 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-operator-scripts\") pod \"manila-f2bf-account-create-gnm6f\" (UID: \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\") " pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.266326 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.271515 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.271721 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ngxbf" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.271894 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.272033 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.304802 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw478\" (UniqueName: \"kubernetes.io/projected/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-kube-api-access-mw478\") pod \"manila-f2bf-account-create-gnm6f\" (UID: \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\") " pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.317090 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.370314 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.371920 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.376742 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.384808 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.386681 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.400722 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.413196 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.473198 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-config-data\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.481585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.481635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-ceph\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.481725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.481806 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-scripts\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.481852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.482049 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-logs\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.482186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.482204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtr4\" (UniqueName: \"kubernetes.io/projected/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-kube-api-access-6gtr4\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.586710 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cad7cc0f-3821-44ee-8b39-71988664ee4e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.586772 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cad7cc0f-3821-44ee-8b39-71988664ee4e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.586801 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.586856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.586884 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gtr4\" (UniqueName: \"kubernetes.io/projected/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-kube-api-access-6gtr4\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.586939 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-config-data\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.586979 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587006 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-ceph\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587083 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7sqc\" (UniqueName: \"kubernetes.io/projected/cad7cc0f-3821-44ee-8b39-71988664ee4e-kube-api-access-p7sqc\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587115 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587152 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-scripts\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587216 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587288 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cad7cc0f-3821-44ee-8b39-71988664ee4e-logs\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587326 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587356 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.587388 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-logs\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.588050 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-logs\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.588380 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.595344 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.596362 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.597290 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-config-data\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.601746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-ceph\") pod 
\"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.601998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.602538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-scripts\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.623121 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gtr4\" (UniqueName: \"kubernetes.io/projected/a3c90ae6-873c-4a00-84a0-a9a60fcc7c74-kube-api-access-6gtr4\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.630772 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.636810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74\") " pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.690091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cad7cc0f-3821-44ee-8b39-71988664ee4e-logs\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.690478 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.690510 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.690578 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cad7cc0f-3821-44ee-8b39-71988664ee4e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.690603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cad7cc0f-3821-44ee-8b39-71988664ee4e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.690628 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.690728 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.690792 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7sqc\" (UniqueName: \"kubernetes.io/projected/cad7cc0f-3821-44ee-8b39-71988664ee4e-kube-api-access-p7sqc\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.690840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.691870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cad7cc0f-3821-44ee-8b39-71988664ee4e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.692174 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cad7cc0f-3821-44ee-8b39-71988664ee4e-logs\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " 
pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.694869 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.700940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cad7cc0f-3821-44ee-8b39-71988664ee4e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.704228 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.706741 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.724306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 
08:59:45.725114 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7sqc\" (UniqueName: \"kubernetes.io/projected/cad7cc0f-3821-44ee-8b39-71988664ee4e-kube-api-access-p7sqc\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.731905 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cad7cc0f-3821-44ee-8b39-71988664ee4e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.750864 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"cad7cc0f-3821-44ee-8b39-71988664ee4e\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.898579 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 08:59:45 crc kubenswrapper[4760]: I1125 08:59:45.966734 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-nh6wn"] Nov 25 08:59:45 crc kubenswrapper[4760]: W1125 08:59:45.975663 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90f12d4c_8065_4ae2_835e_dd2cd09160a6.slice/crio-3766773f3bab9206c5f41956d24c5aa4c64a62136ec6f08bcb78ad6569283ca2 WatchSource:0}: Error finding container 3766773f3bab9206c5f41956d24c5aa4c64a62136ec6f08bcb78ad6569283ca2: Status 404 returned error can't find the container with id 3766773f3bab9206c5f41956d24c5aa4c64a62136ec6f08bcb78ad6569283ca2 Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.011870 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.116887 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-f2bf-account-create-gnm6f"] Nov 25 08:59:46 crc kubenswrapper[4760]: W1125 08:59:46.120408 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d23dc6f_cedb_4acd_9107_f39d6ed0f903.slice/crio-8e76de18b257a64589d185529bbababbc7dbe5da6e301fe6be1c4dcfb187d615 WatchSource:0}: Error finding container 8e76de18b257a64589d185529bbababbc7dbe5da6e301fe6be1c4dcfb187d615: Status 404 returned error can't find the container with id 8e76de18b257a64589d185529bbababbc7dbe5da6e301fe6be1c4dcfb187d615 Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.222237 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.525839 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 08:59:46 crc 
kubenswrapper[4760]: I1125 08:59:46.571950 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f2bf-account-create-gnm6f" event={"ID":"4d23dc6f-cedb-4acd-9107-f39d6ed0f903","Type":"ContainerStarted","Data":"407840f261917036f4bf5db662948095e03fe61844c4491274ad88bb777d6122"} Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.572030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f2bf-account-create-gnm6f" event={"ID":"4d23dc6f-cedb-4acd-9107-f39d6ed0f903","Type":"ContainerStarted","Data":"8e76de18b257a64589d185529bbababbc7dbe5da6e301fe6be1c4dcfb187d615"} Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.575902 4760 generic.go:334] "Generic (PLEG): container finished" podID="90f12d4c-8065-4ae2-835e-dd2cd09160a6" containerID="9411ad2f204a8e4667ab5abcc3c500b08a4ad1d8fe7721925e881d59c50f391a" exitCode=0 Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.575967 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-nh6wn" event={"ID":"90f12d4c-8065-4ae2-835e-dd2cd09160a6","Type":"ContainerDied","Data":"9411ad2f204a8e4667ab5abcc3c500b08a4ad1d8fe7721925e881d59c50f391a"} Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.575993 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-nh6wn" event={"ID":"90f12d4c-8065-4ae2-835e-dd2cd09160a6","Type":"ContainerStarted","Data":"3766773f3bab9206c5f41956d24c5aa4c64a62136ec6f08bcb78ad6569283ca2"} Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.582611 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f4f729ff-1806-4032-922b-2a47e4a9d7ff","Type":"ContainerStarted","Data":"ad6402b3775a61a027b286e642e9ba50adaec69d4b4a196a84c359d6cc6fbc8c"} Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.586327 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"09dd7945-dda4-4682-b55e-44569ec2bc78","Type":"ContainerStarted","Data":"a6e10a1cefa34bb1ece358b61645405f3a945f636de5bc008073c2e569dfbb8b"} Nov 25 08:59:46 crc kubenswrapper[4760]: I1125 08:59:46.689308 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 08:59:46 crc kubenswrapper[4760]: W1125 08:59:46.752884 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcad7cc0f_3821_44ee_8b39_71988664ee4e.slice/crio-c983df9a7903bcd50d9b6827c4da22adcc3ca3f75ded1ec5ef2093219158e9c5 WatchSource:0}: Error finding container c983df9a7903bcd50d9b6827c4da22adcc3ca3f75ded1ec5ef2093219158e9c5: Status 404 returned error can't find the container with id c983df9a7903bcd50d9b6827c4da22adcc3ca3f75ded1ec5ef2093219158e9c5 Nov 25 08:59:47 crc kubenswrapper[4760]: I1125 08:59:47.606672 4760 generic.go:334] "Generic (PLEG): container finished" podID="4d23dc6f-cedb-4acd-9107-f39d6ed0f903" containerID="407840f261917036f4bf5db662948095e03fe61844c4491274ad88bb777d6122" exitCode=0 Nov 25 08:59:47 crc kubenswrapper[4760]: I1125 08:59:47.606749 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f2bf-account-create-gnm6f" event={"ID":"4d23dc6f-cedb-4acd-9107-f39d6ed0f903","Type":"ContainerDied","Data":"407840f261917036f4bf5db662948095e03fe61844c4491274ad88bb777d6122"} Nov 25 08:59:47 crc kubenswrapper[4760]: I1125 08:59:47.612592 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cad7cc0f-3821-44ee-8b39-71988664ee4e","Type":"ContainerStarted","Data":"ff540b75c92ef8e1f6c854ee64537f3cd030b16ed3acbc8c4acc6fe5546ff5c9"} Nov 25 08:59:47 crc kubenswrapper[4760]: I1125 08:59:47.612644 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"cad7cc0f-3821-44ee-8b39-71988664ee4e","Type":"ContainerStarted","Data":"c983df9a7903bcd50d9b6827c4da22adcc3ca3f75ded1ec5ef2093219158e9c5"} Nov 25 08:59:47 crc kubenswrapper[4760]: I1125 08:59:47.618908 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f4f729ff-1806-4032-922b-2a47e4a9d7ff","Type":"ContainerStarted","Data":"f524f14a9b912e5e96bd8a9674af2f8fb0b045df9d660606a93a07d9fd12ca08"} Nov 25 08:59:47 crc kubenswrapper[4760]: I1125 08:59:47.618961 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f4f729ff-1806-4032-922b-2a47e4a9d7ff","Type":"ContainerStarted","Data":"8d93fddbcc361d4d93fc378ae0349323895d8f199df526c6d33fd91a3edff417"} Nov 25 08:59:47 crc kubenswrapper[4760]: I1125 08:59:47.627626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74","Type":"ContainerStarted","Data":"5c1517827ae00cb3b4141d89d542a1e89426f31dd4369dee768ee95fec4e578a"} Nov 25 08:59:47 crc kubenswrapper[4760]: I1125 08:59:47.627670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74","Type":"ContainerStarted","Data":"6065a204411eb8335b20ecd2c73b59f5164c1e0aa43d36bf1c921a912cfda683"} Nov 25 08:59:47 crc kubenswrapper[4760]: I1125 08:59:47.654746 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.672134885 podStartE2EDuration="3.654723109s" podCreationTimestamp="2025-11-25 08:59:44 +0000 UTC" firstStartedPulling="2025-11-25 08:59:45.634365077 +0000 UTC m=+2919.343395872" lastFinishedPulling="2025-11-25 08:59:46.616953301 +0000 UTC m=+2920.325984096" observedRunningTime="2025-11-25 08:59:47.648699857 +0000 UTC m=+2921.357730652" watchObservedRunningTime="2025-11-25 08:59:47.654723109 
+0000 UTC m=+2921.363753904" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.190466 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.211279 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.370403 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgzxl\" (UniqueName: \"kubernetes.io/projected/90f12d4c-8065-4ae2-835e-dd2cd09160a6-kube-api-access-lgzxl\") pod \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\" (UID: \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\") " Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.370570 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f12d4c-8065-4ae2-835e-dd2cd09160a6-operator-scripts\") pod \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\" (UID: \"90f12d4c-8065-4ae2-835e-dd2cd09160a6\") " Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.370601 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-operator-scripts\") pod \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\" (UID: \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\") " Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.370739 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw478\" (UniqueName: \"kubernetes.io/projected/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-kube-api-access-mw478\") pod \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\" (UID: \"4d23dc6f-cedb-4acd-9107-f39d6ed0f903\") " Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.372602 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d23dc6f-cedb-4acd-9107-f39d6ed0f903" (UID: "4d23dc6f-cedb-4acd-9107-f39d6ed0f903"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.372602 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f12d4c-8065-4ae2-835e-dd2cd09160a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90f12d4c-8065-4ae2-835e-dd2cd09160a6" (UID: "90f12d4c-8065-4ae2-835e-dd2cd09160a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.378011 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-kube-api-access-mw478" (OuterVolumeSpecName: "kube-api-access-mw478") pod "4d23dc6f-cedb-4acd-9107-f39d6ed0f903" (UID: "4d23dc6f-cedb-4acd-9107-f39d6ed0f903"). InnerVolumeSpecName "kube-api-access-mw478". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.381439 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f12d4c-8065-4ae2-835e-dd2cd09160a6-kube-api-access-lgzxl" (OuterVolumeSpecName: "kube-api-access-lgzxl") pod "90f12d4c-8065-4ae2-835e-dd2cd09160a6" (UID: "90f12d4c-8065-4ae2-835e-dd2cd09160a6"). InnerVolumeSpecName "kube-api-access-lgzxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.474295 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.474340 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw478\" (UniqueName: \"kubernetes.io/projected/4d23dc6f-cedb-4acd-9107-f39d6ed0f903-kube-api-access-mw478\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.474356 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgzxl\" (UniqueName: \"kubernetes.io/projected/90f12d4c-8065-4ae2-835e-dd2cd09160a6-kube-api-access-lgzxl\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.474374 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f12d4c-8065-4ae2-835e-dd2cd09160a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.654614 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f2bf-account-create-gnm6f" event={"ID":"4d23dc6f-cedb-4acd-9107-f39d6ed0f903","Type":"ContainerDied","Data":"8e76de18b257a64589d185529bbababbc7dbe5da6e301fe6be1c4dcfb187d615"} Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.654655 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e76de18b257a64589d185529bbababbc7dbe5da6e301fe6be1c4dcfb187d615" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.654714 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-f2bf-account-create-gnm6f" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.676938 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cad7cc0f-3821-44ee-8b39-71988664ee4e","Type":"ContainerStarted","Data":"ceddbe73f8a721b9fcd53ff27f949b30c7bd6e2dd72e36ed82209a25866628b2"} Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.697581 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-nh6wn" event={"ID":"90f12d4c-8065-4ae2-835e-dd2cd09160a6","Type":"ContainerDied","Data":"3766773f3bab9206c5f41956d24c5aa4c64a62136ec6f08bcb78ad6569283ca2"} Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.697634 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3766773f3bab9206c5f41956d24c5aa4c64a62136ec6f08bcb78ad6569283ca2" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.697672 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-nh6wn" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.720886 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"09dd7945-dda4-4682-b55e-44569ec2bc78","Type":"ContainerStarted","Data":"506e6a6c8bc76ab6e12e598029e4ce7f42b87340c4cd714d23c6469b68f3b2a4"} Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.720949 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"09dd7945-dda4-4682-b55e-44569ec2bc78","Type":"ContainerStarted","Data":"7d5d4be119bc74f96bd0bab19cc3ea585a7f943bb740a1c4292349e8995b15c0"} Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.726291 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a3c90ae6-873c-4a00-84a0-a9a60fcc7c74","Type":"ContainerStarted","Data":"4d6c294c04d12e88781c10eaed1931a2ad932080308efdb0c00936586ccf797d"} Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.728935 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.728914816 podStartE2EDuration="4.728914816s" podCreationTimestamp="2025-11-25 08:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:59:48.726516927 +0000 UTC m=+2922.435547722" watchObservedRunningTime="2025-11-25 08:59:48.728914816 +0000 UTC m=+2922.437945611" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.779318 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.651222329 podStartE2EDuration="4.779275563s" podCreationTimestamp="2025-11-25 08:59:44 +0000 UTC" firstStartedPulling="2025-11-25 08:59:46.234257003 +0000 UTC m=+2919.943287798" lastFinishedPulling="2025-11-25 08:59:47.362310237 +0000 UTC 
m=+2921.071341032" observedRunningTime="2025-11-25 08:59:48.769749071 +0000 UTC m=+2922.478779876" watchObservedRunningTime="2025-11-25 08:59:48.779275563 +0000 UTC m=+2922.488306368" Nov 25 08:59:48 crc kubenswrapper[4760]: I1125 08:59:48.829332 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.82930001 podStartE2EDuration="4.82930001s" podCreationTimestamp="2025-11-25 08:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:59:48.825979645 +0000 UTC m=+2922.535010460" watchObservedRunningTime="2025-11-25 08:59:48.82930001 +0000 UTC m=+2922.538330795" Nov 25 08:59:49 crc kubenswrapper[4760]: I1125 08:59:49.779341 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:49 crc kubenswrapper[4760]: I1125 08:59:49.835970 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.474975 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-pqtpz"] Nov 25 08:59:50 crc kubenswrapper[4760]: E1125 08:59:50.475415 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d23dc6f-cedb-4acd-9107-f39d6ed0f903" containerName="mariadb-account-create" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.475432 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d23dc6f-cedb-4acd-9107-f39d6ed0f903" containerName="mariadb-account-create" Nov 25 08:59:50 crc kubenswrapper[4760]: E1125 08:59:50.475450 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f12d4c-8065-4ae2-835e-dd2cd09160a6" containerName="mariadb-database-create" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.475457 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="90f12d4c-8065-4ae2-835e-dd2cd09160a6" containerName="mariadb-database-create" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.475647 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f12d4c-8065-4ae2-835e-dd2cd09160a6" containerName="mariadb-database-create" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.475667 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d23dc6f-cedb-4acd-9107-f39d6ed0f903" containerName="mariadb-account-create" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.476319 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.478708 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.478975 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-p67ht" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.486971 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-pqtpz"] Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.533507 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46qh\" (UniqueName: \"kubernetes.io/projected/dcbad6e1-fbdc-43fb-8295-40975fd98c69-kube-api-access-j46qh\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.533547 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-combined-ca-bundle\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc 
kubenswrapper[4760]: I1125 08:59:50.533686 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-config-data\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.533722 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-job-config-data\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.635415 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-config-data\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.635490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-job-config-data\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.635615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j46qh\" (UniqueName: \"kubernetes.io/projected/dcbad6e1-fbdc-43fb-8295-40975fd98c69-kube-api-access-j46qh\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.635644 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-combined-ca-bundle\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.642848 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-job-config-data\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.643062 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-config-data\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.647109 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-combined-ca-bundle\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.655965 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46qh\" (UniqueName: \"kubernetes.io/projected/dcbad6e1-fbdc-43fb-8295-40975fd98c69-kube-api-access-j46qh\") pod \"manila-db-sync-pqtpz\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:50 crc kubenswrapper[4760]: I1125 08:59:50.802028 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-pqtpz" Nov 25 08:59:51 crc kubenswrapper[4760]: I1125 08:59:51.429866 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-pqtpz"] Nov 25 08:59:51 crc kubenswrapper[4760]: W1125 08:59:51.437406 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcbad6e1_fbdc_43fb_8295_40975fd98c69.slice/crio-74aa663b520bf07bc02a52d62fb3f91fc868ec71b0d93af7728cf138593cefad WatchSource:0}: Error finding container 74aa663b520bf07bc02a52d62fb3f91fc868ec71b0d93af7728cf138593cefad: Status 404 returned error can't find the container with id 74aa663b520bf07bc02a52d62fb3f91fc868ec71b0d93af7728cf138593cefad Nov 25 08:59:51 crc kubenswrapper[4760]: I1125 08:59:51.749909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-pqtpz" event={"ID":"dcbad6e1-fbdc-43fb-8295-40975fd98c69","Type":"ContainerStarted","Data":"74aa663b520bf07bc02a52d62fb3f91fc868ec71b0d93af7728cf138593cefad"} Nov 25 08:59:55 crc kubenswrapper[4760]: I1125 08:59:55.002495 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 25 08:59:55 crc kubenswrapper[4760]: I1125 08:59:55.056080 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 25 08:59:55 crc kubenswrapper[4760]: I1125 08:59:55.899612 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 08:59:55 crc kubenswrapper[4760]: I1125 08:59:55.899920 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 08:59:55 crc kubenswrapper[4760]: I1125 08:59:55.938902 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 08:59:55 crc 
kubenswrapper[4760]: I1125 08:59:55.944273 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 08:59:56 crc kubenswrapper[4760]: I1125 08:59:56.012345 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:56 crc kubenswrapper[4760]: I1125 08:59:56.012410 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:56 crc kubenswrapper[4760]: I1125 08:59:56.061078 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:56 crc kubenswrapper[4760]: I1125 08:59:56.071830 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:56 crc kubenswrapper[4760]: I1125 08:59:56.824585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-pqtpz" event={"ID":"dcbad6e1-fbdc-43fb-8295-40975fd98c69","Type":"ContainerStarted","Data":"c392ecd0f3e335726d7bdfe7588957137a3b24844f83da30c849ceac47448fe3"} Nov 25 08:59:56 crc kubenswrapper[4760]: I1125 08:59:56.824919 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:56 crc kubenswrapper[4760]: I1125 08:59:56.824940 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 08:59:56 crc kubenswrapper[4760]: I1125 08:59:56.824954 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 08:59:56 crc kubenswrapper[4760]: I1125 08:59:56.824965 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:58 crc kubenswrapper[4760]: I1125 08:59:58.839430 4760 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 08:59:58 crc kubenswrapper[4760]: I1125 08:59:58.841373 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 08:59:58 crc kubenswrapper[4760]: I1125 08:59:58.841406 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 08:59:58 crc kubenswrapper[4760]: I1125 08:59:58.841692 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 08:59:58 crc kubenswrapper[4760]: I1125 08:59:58.863559 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 08:59:58 crc kubenswrapper[4760]: I1125 08:59:58.864472 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 08:59:58 crc kubenswrapper[4760]: I1125 08:59:58.868746 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 08:59:58 crc kubenswrapper[4760]: I1125 08:59:58.884315 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-pqtpz" podStartSLOduration=4.165724918 podStartE2EDuration="8.884297319s" podCreationTimestamp="2025-11-25 08:59:50 +0000 UTC" firstStartedPulling="2025-11-25 08:59:51.44200066 +0000 UTC m=+2925.151031455" lastFinishedPulling="2025-11-25 08:59:56.160573061 +0000 UTC m=+2929.869603856" observedRunningTime="2025-11-25 08:59:56.851587476 +0000 UTC m=+2930.560618271" watchObservedRunningTime="2025-11-25 08:59:58.884297319 +0000 UTC m=+2932.593328114" Nov 25 08:59:59 crc kubenswrapper[4760]: I1125 08:59:59.007354 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.168143 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl"] Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.170553 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.177227 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.177266 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.182204 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl"] Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.363207 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdhj2\" (UniqueName: \"kubernetes.io/projected/0fe1b33a-f393-4568-b5f0-7e2c57083e36-kube-api-access-fdhj2\") pod \"collect-profiles-29401020-rvgpl\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.363267 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fe1b33a-f393-4568-b5f0-7e2c57083e36-secret-volume\") pod \"collect-profiles-29401020-rvgpl\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.363774 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0fe1b33a-f393-4568-b5f0-7e2c57083e36-config-volume\") pod \"collect-profiles-29401020-rvgpl\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.465721 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fe1b33a-f393-4568-b5f0-7e2c57083e36-config-volume\") pod \"collect-profiles-29401020-rvgpl\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.465804 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdhj2\" (UniqueName: \"kubernetes.io/projected/0fe1b33a-f393-4568-b5f0-7e2c57083e36-kube-api-access-fdhj2\") pod \"collect-profiles-29401020-rvgpl\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.465858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fe1b33a-f393-4568-b5f0-7e2c57083e36-secret-volume\") pod \"collect-profiles-29401020-rvgpl\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.467532 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fe1b33a-f393-4568-b5f0-7e2c57083e36-config-volume\") pod \"collect-profiles-29401020-rvgpl\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.479318 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fe1b33a-f393-4568-b5f0-7e2c57083e36-secret-volume\") pod \"collect-profiles-29401020-rvgpl\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.485170 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdhj2\" (UniqueName: \"kubernetes.io/projected/0fe1b33a-f393-4568-b5f0-7e2c57083e36-kube-api-access-fdhj2\") pod \"collect-profiles-29401020-rvgpl\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.503750 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:00 crc kubenswrapper[4760]: I1125 09:00:00.971695 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl"] Nov 25 09:00:00 crc kubenswrapper[4760]: W1125 09:00:00.980929 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fe1b33a_f393_4568_b5f0_7e2c57083e36.slice/crio-cf50077829faa0b22a5fe28ba71226c57e5c34577f17e4fa09cb6678f07e393b WatchSource:0}: Error finding container cf50077829faa0b22a5fe28ba71226c57e5c34577f17e4fa09cb6678f07e393b: Status 404 returned error can't find the container with id cf50077829faa0b22a5fe28ba71226c57e5c34577f17e4fa09cb6678f07e393b Nov 25 09:00:01 crc kubenswrapper[4760]: I1125 09:00:01.863784 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" 
event={"ID":"0fe1b33a-f393-4568-b5f0-7e2c57083e36","Type":"ContainerStarted","Data":"cf50077829faa0b22a5fe28ba71226c57e5c34577f17e4fa09cb6678f07e393b"} Nov 25 09:00:02 crc kubenswrapper[4760]: I1125 09:00:02.875203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" event={"ID":"0fe1b33a-f393-4568-b5f0-7e2c57083e36","Type":"ContainerStarted","Data":"c49a0c5b698ef50bc3a85fef5dc2fdcfded5d751afce3e2ac6b7e0a06a9d9016"} Nov 25 09:00:03 crc kubenswrapper[4760]: I1125 09:00:03.886699 4760 generic.go:334] "Generic (PLEG): container finished" podID="0fe1b33a-f393-4568-b5f0-7e2c57083e36" containerID="c49a0c5b698ef50bc3a85fef5dc2fdcfded5d751afce3e2ac6b7e0a06a9d9016" exitCode=0 Nov 25 09:00:03 crc kubenswrapper[4760]: I1125 09:00:03.886831 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" event={"ID":"0fe1b33a-f393-4568-b5f0-7e2c57083e36","Type":"ContainerDied","Data":"c49a0c5b698ef50bc3a85fef5dc2fdcfded5d751afce3e2ac6b7e0a06a9d9016"} Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.217655 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.361445 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fe1b33a-f393-4568-b5f0-7e2c57083e36-config-volume\") pod \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.361580 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdhj2\" (UniqueName: \"kubernetes.io/projected/0fe1b33a-f393-4568-b5f0-7e2c57083e36-kube-api-access-fdhj2\") pod \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.361661 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fe1b33a-f393-4568-b5f0-7e2c57083e36-secret-volume\") pod \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\" (UID: \"0fe1b33a-f393-4568-b5f0-7e2c57083e36\") " Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.362347 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe1b33a-f393-4568-b5f0-7e2c57083e36-config-volume" (OuterVolumeSpecName: "config-volume") pod "0fe1b33a-f393-4568-b5f0-7e2c57083e36" (UID: "0fe1b33a-f393-4568-b5f0-7e2c57083e36"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.370810 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe1b33a-f393-4568-b5f0-7e2c57083e36-kube-api-access-fdhj2" (OuterVolumeSpecName: "kube-api-access-fdhj2") pod "0fe1b33a-f393-4568-b5f0-7e2c57083e36" (UID: "0fe1b33a-f393-4568-b5f0-7e2c57083e36"). 
InnerVolumeSpecName "kube-api-access-fdhj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.370890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe1b33a-f393-4568-b5f0-7e2c57083e36-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0fe1b33a-f393-4568-b5f0-7e2c57083e36" (UID: "0fe1b33a-f393-4568-b5f0-7e2c57083e36"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.467743 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fe1b33a-f393-4568-b5f0-7e2c57083e36-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.468044 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdhj2\" (UniqueName: \"kubernetes.io/projected/0fe1b33a-f393-4568-b5f0-7e2c57083e36-kube-api-access-fdhj2\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.468057 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fe1b33a-f393-4568-b5f0-7e2c57083e36-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.907729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" event={"ID":"0fe1b33a-f393-4568-b5f0-7e2c57083e36","Type":"ContainerDied","Data":"cf50077829faa0b22a5fe28ba71226c57e5c34577f17e4fa09cb6678f07e393b"} Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.907766 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf50077829faa0b22a5fe28ba71226c57e5c34577f17e4fa09cb6678f07e393b" Nov 25 09:00:05 crc kubenswrapper[4760]: I1125 09:00:05.907825 4760 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl" Nov 25 09:00:06 crc kubenswrapper[4760]: I1125 09:00:06.288207 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w"] Nov 25 09:00:06 crc kubenswrapper[4760]: I1125 09:00:06.297802 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-llf4w"] Nov 25 09:00:06 crc kubenswrapper[4760]: I1125 09:00:06.952959 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c22b69-2113-4060-9ec8-fea251da8846" path="/var/lib/kubelet/pods/e8c22b69-2113-4060-9ec8-fea251da8846/volumes" Nov 25 09:00:29 crc kubenswrapper[4760]: I1125 09:00:29.125674 4760 generic.go:334] "Generic (PLEG): container finished" podID="dcbad6e1-fbdc-43fb-8295-40975fd98c69" containerID="c392ecd0f3e335726d7bdfe7588957137a3b24844f83da30c849ceac47448fe3" exitCode=0 Nov 25 09:00:29 crc kubenswrapper[4760]: I1125 09:00:29.125907 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-pqtpz" event={"ID":"dcbad6e1-fbdc-43fb-8295-40975fd98c69","Type":"ContainerDied","Data":"c392ecd0f3e335726d7bdfe7588957137a3b24844f83da30c849ceac47448fe3"} Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.554330 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-pqtpz" Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.667966 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-config-data\") pod \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.668070 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46qh\" (UniqueName: \"kubernetes.io/projected/dcbad6e1-fbdc-43fb-8295-40975fd98c69-kube-api-access-j46qh\") pod \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.668385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-combined-ca-bundle\") pod \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.668612 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-job-config-data\") pod \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\" (UID: \"dcbad6e1-fbdc-43fb-8295-40975fd98c69\") " Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.677334 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "dcbad6e1-fbdc-43fb-8295-40975fd98c69" (UID: "dcbad6e1-fbdc-43fb-8295-40975fd98c69"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.685076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-config-data" (OuterVolumeSpecName: "config-data") pod "dcbad6e1-fbdc-43fb-8295-40975fd98c69" (UID: "dcbad6e1-fbdc-43fb-8295-40975fd98c69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.685382 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbad6e1-fbdc-43fb-8295-40975fd98c69-kube-api-access-j46qh" (OuterVolumeSpecName: "kube-api-access-j46qh") pod "dcbad6e1-fbdc-43fb-8295-40975fd98c69" (UID: "dcbad6e1-fbdc-43fb-8295-40975fd98c69"). InnerVolumeSpecName "kube-api-access-j46qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.731569 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcbad6e1-fbdc-43fb-8295-40975fd98c69" (UID: "dcbad6e1-fbdc-43fb-8295-40975fd98c69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.771741 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.771783 4760 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.771794 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbad6e1-fbdc-43fb-8295-40975fd98c69-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:30 crc kubenswrapper[4760]: I1125 09:00:30.771803 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j46qh\" (UniqueName: \"kubernetes.io/projected/dcbad6e1-fbdc-43fb-8295-40975fd98c69-kube-api-access-j46qh\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.145470 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-pqtpz" event={"ID":"dcbad6e1-fbdc-43fb-8295-40975fd98c69","Type":"ContainerDied","Data":"74aa663b520bf07bc02a52d62fb3f91fc868ec71b0d93af7728cf138593cefad"} Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.145520 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74aa663b520bf07bc02a52d62fb3f91fc868ec71b0d93af7728cf138593cefad" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.145676 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-pqtpz" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.444319 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 09:00:31 crc kubenswrapper[4760]: E1125 09:00:31.445143 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe1b33a-f393-4568-b5f0-7e2c57083e36" containerName="collect-profiles" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.445159 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe1b33a-f393-4568-b5f0-7e2c57083e36" containerName="collect-profiles" Nov 25 09:00:31 crc kubenswrapper[4760]: E1125 09:00:31.445202 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbad6e1-fbdc-43fb-8295-40975fd98c69" containerName="manila-db-sync" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.445210 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbad6e1-fbdc-43fb-8295-40975fd98c69" containerName="manila-db-sync" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.445448 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbad6e1-fbdc-43fb-8295-40975fd98c69" containerName="manila-db-sync" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.445466 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe1b33a-f393-4568-b5f0-7e2c57083e36" containerName="collect-profiles" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.446764 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.450473 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.457614 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.457820 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.458011 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-p67ht" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.493838 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.591661 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7s9r\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-kube-api-access-j7s9r\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.591707 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-scripts\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.591736 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: 
\"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.591778 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-ceph\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.591805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.591906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.591935 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.591952 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " 
pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.599405 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.607182 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.615043 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.631380 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.693819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.693861 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.693910 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7s9r\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-kube-api-access-j7s9r\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.693935 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-scripts\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.693976 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.694005 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6791bbe-2777-4891-a7dd-7622d9af1bc9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.694053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-ceph\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.694069 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.694126 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data-custom\") pod 
\"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.694146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.694170 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swhp7\" (UniqueName: \"kubernetes.io/projected/f6791bbe-2777-4891-a7dd-7622d9af1bc9-kube-api-access-swhp7\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.694212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-scripts\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.694226 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.694408 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " 
pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.695701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.695900 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.705899 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-scripts\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.720180 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.720598 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-ceph\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.720741 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.721522 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.771420 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7s9r\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-kube-api-access-j7s9r\") pod \"manila-share-share1-0\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.806537 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6791bbe-2777-4891-a7dd-7622d9af1bc9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.806663 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.806698 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data\") pod \"manila-scheduler-0\" (UID: 
\"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.806732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swhp7\" (UniqueName: \"kubernetes.io/projected/f6791bbe-2777-4891-a7dd-7622d9af1bc9-kube-api-access-swhp7\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.806770 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.806789 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-scripts\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.825735 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.826476 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6791bbe-2777-4891-a7dd-7622d9af1bc9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.848110 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6885d49d55-9mqqw"] Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.853575 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.861023 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.867997 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-scripts\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.890336 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.895337 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6885d49d55-9mqqw"] Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.919640 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.925876 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swhp7\" (UniqueName: \"kubernetes.io/projected/f6791bbe-2777-4891-a7dd-7622d9af1bc9-kube-api-access-swhp7\") pod \"manila-scheduler-0\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.962890 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.987591 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.989271 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 09:00:31 crc kubenswrapper[4760]: I1125 09:00:31.995850 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.017059 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.020345 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8kj\" (UniqueName: \"kubernetes.io/projected/1b305350-e74d-4e9a-8af0-14e88ddfccc0-kube-api-access-ft8kj\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.020452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-config\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.020525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-ovsdbserver-sb\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.020598 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-dns-svc\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.020636 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-openstack-edpm-ipam\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.020669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-ovsdbserver-nb\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.122044 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.122144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-config\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.122175 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9shhb\" (UniqueName: \"kubernetes.io/projected/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-kube-api-access-9shhb\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123165 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-logs\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123219 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-ovsdbserver-sb\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123289 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123376 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-dns-svc\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data-custom\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123446 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-openstack-edpm-ipam\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-etc-machine-id\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123499 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-ovsdbserver-nb\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123531 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-scripts\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.123548 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft8kj\" (UniqueName: \"kubernetes.io/projected/1b305350-e74d-4e9a-8af0-14e88ddfccc0-kube-api-access-ft8kj\") pod 
\"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.127356 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-config\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.127930 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-ovsdbserver-sb\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.128969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-dns-svc\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.129495 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-openstack-edpm-ipam\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.130291 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b305350-e74d-4e9a-8af0-14e88ddfccc0-ovsdbserver-nb\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " 
pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.147496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft8kj\" (UniqueName: \"kubernetes.io/projected/1b305350-e74d-4e9a-8af0-14e88ddfccc0-kube-api-access-ft8kj\") pod \"dnsmasq-dns-6885d49d55-9mqqw\" (UID: \"1b305350-e74d-4e9a-8af0-14e88ddfccc0\") " pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.225476 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9shhb\" (UniqueName: \"kubernetes.io/projected/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-kube-api-access-9shhb\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.225944 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-logs\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.226484 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-logs\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.226647 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.227766 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data-custom\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.227892 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-etc-machine-id\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.227945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-scripts\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.228017 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.232302 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.232779 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-etc-machine-id\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc 
kubenswrapper[4760]: I1125 09:00:32.238419 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-scripts\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.239094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.239637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data-custom\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.245747 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9shhb\" (UniqueName: \"kubernetes.io/projected/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-kube-api-access-9shhb\") pod \"manila-api-0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.388417 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.399708 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.532338 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 09:00:32 crc kubenswrapper[4760]: I1125 09:00:32.719219 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 09:00:32 crc kubenswrapper[4760]: W1125 09:00:32.730389 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6791bbe_2777_4891_a7dd_7622d9af1bc9.slice/crio-47753af1d3512d1fdd740253a24749a88e2446a07aebb523c374696acf8c87d5 WatchSource:0}: Error finding container 47753af1d3512d1fdd740253a24749a88e2446a07aebb523c374696acf8c87d5: Status 404 returned error can't find the container with id 47753af1d3512d1fdd740253a24749a88e2446a07aebb523c374696acf8c87d5 Nov 25 09:00:33 crc kubenswrapper[4760]: I1125 09:00:33.017750 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6885d49d55-9mqqw"] Nov 25 09:00:33 crc kubenswrapper[4760]: W1125 09:00:33.024475 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b305350_e74d_4e9a_8af0_14e88ddfccc0.slice/crio-1fc1805a6cba9c72f03640dbede390652ae4d7abca0ec164b397ff8408fbf005 WatchSource:0}: Error finding container 1fc1805a6cba9c72f03640dbede390652ae4d7abca0ec164b397ff8408fbf005: Status 404 returned error can't find the container with id 1fc1805a6cba9c72f03640dbede390652ae4d7abca0ec164b397ff8408fbf005 Nov 25 09:00:33 crc kubenswrapper[4760]: I1125 09:00:33.154967 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 09:00:33 crc kubenswrapper[4760]: I1125 09:00:33.220755 4760 scope.go:117] "RemoveContainer" containerID="37de50c433c0633c27cda1eb5db0916787750c3256b9dddf18829eacf751ef26" Nov 25 09:00:33 crc kubenswrapper[4760]: I1125 
09:00:33.228465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" event={"ID":"1b305350-e74d-4e9a-8af0-14e88ddfccc0","Type":"ContainerStarted","Data":"1fc1805a6cba9c72f03640dbede390652ae4d7abca0ec164b397ff8408fbf005"} Nov 25 09:00:33 crc kubenswrapper[4760]: I1125 09:00:33.241528 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f6791bbe-2777-4891-a7dd-7622d9af1bc9","Type":"ContainerStarted","Data":"47753af1d3512d1fdd740253a24749a88e2446a07aebb523c374696acf8c87d5"} Nov 25 09:00:33 crc kubenswrapper[4760]: I1125 09:00:33.243977 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ce41bb06-b62f-47e2-bbdc-a833b0180ab0","Type":"ContainerStarted","Data":"41993344b9662fe142c905c49aa19536262b5149debda5ee0b637f9642a5e6f7"} Nov 25 09:00:33 crc kubenswrapper[4760]: I1125 09:00:33.246626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e3d6e790-be37-4105-8eed-61c98c6576b5","Type":"ContainerStarted","Data":"b7aa7933c0caad67a5ef3906a55e41ff77d06161382efc86f919c948119fa407"} Nov 25 09:00:34 crc kubenswrapper[4760]: I1125 09:00:34.263838 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ce41bb06-b62f-47e2-bbdc-a833b0180ab0","Type":"ContainerStarted","Data":"f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d"} Nov 25 09:00:34 crc kubenswrapper[4760]: I1125 09:00:34.272280 4760 generic.go:334] "Generic (PLEG): container finished" podID="1b305350-e74d-4e9a-8af0-14e88ddfccc0" containerID="10256135a490ac6582d88f6b5f0f612058cd6be2be7c0665d2f9ff72704ef5ed" exitCode=0 Nov 25 09:00:34 crc kubenswrapper[4760]: I1125 09:00:34.272438 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" 
event={"ID":"1b305350-e74d-4e9a-8af0-14e88ddfccc0","Type":"ContainerDied","Data":"10256135a490ac6582d88f6b5f0f612058cd6be2be7c0665d2f9ff72704ef5ed"} Nov 25 09:00:34 crc kubenswrapper[4760]: I1125 09:00:34.276857 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f6791bbe-2777-4891-a7dd-7622d9af1bc9","Type":"ContainerStarted","Data":"caa937628c52b4850f908062ac3421fa3ad6794c0a63a97ca6e09a2ae9f6714c"} Nov 25 09:00:34 crc kubenswrapper[4760]: I1125 09:00:34.705013 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 25 09:00:35 crc kubenswrapper[4760]: I1125 09:00:35.291051 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" event={"ID":"1b305350-e74d-4e9a-8af0-14e88ddfccc0","Type":"ContainerStarted","Data":"ec5bc350efed2332b1c51be5a60cb9700e2630f34f4485cfa3ff56d9b0e96e63"} Nov 25 09:00:35 crc kubenswrapper[4760]: I1125 09:00:35.291299 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:35 crc kubenswrapper[4760]: I1125 09:00:35.293975 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f6791bbe-2777-4891-a7dd-7622d9af1bc9","Type":"ContainerStarted","Data":"7d8164be378c049a6782a0ef56f3adb0db368bc95c650e350bbc1724515de79f"} Nov 25 09:00:35 crc kubenswrapper[4760]: I1125 09:00:35.298549 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ce41bb06-b62f-47e2-bbdc-a833b0180ab0","Type":"ContainerStarted","Data":"6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed"} Nov 25 09:00:35 crc kubenswrapper[4760]: I1125 09:00:35.299677 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 09:00:35 crc kubenswrapper[4760]: I1125 09:00:35.327603 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" podStartSLOduration=4.327573117 podStartE2EDuration="4.327573117s" podCreationTimestamp="2025-11-25 09:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:00:35.316908293 +0000 UTC m=+2969.025939088" watchObservedRunningTime="2025-11-25 09:00:35.327573117 +0000 UTC m=+2969.036603912" Nov 25 09:00:35 crc kubenswrapper[4760]: I1125 09:00:35.342902 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.342880154 podStartE2EDuration="4.342880154s" podCreationTimestamp="2025-11-25 09:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:00:35.337719356 +0000 UTC m=+2969.046750151" watchObservedRunningTime="2025-11-25 09:00:35.342880154 +0000 UTC m=+2969.051910949" Nov 25 09:00:36 crc kubenswrapper[4760]: I1125 09:00:36.319595 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerName="manila-api-log" containerID="cri-o://f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d" gracePeriod=30 Nov 25 09:00:36 crc kubenswrapper[4760]: I1125 09:00:36.319650 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerName="manila-api" containerID="cri-o://6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed" gracePeriod=30 Nov 25 09:00:36 crc kubenswrapper[4760]: I1125 09:00:36.982312 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=5.2401515419999996 podStartE2EDuration="5.982290306s" podCreationTimestamp="2025-11-25 09:00:31 +0000 UTC" 
firstStartedPulling="2025-11-25 09:00:32.732661824 +0000 UTC m=+2966.441692609" lastFinishedPulling="2025-11-25 09:00:33.474800578 +0000 UTC m=+2967.183831373" observedRunningTime="2025-11-25 09:00:35.374537237 +0000 UTC m=+2969.083568032" watchObservedRunningTime="2025-11-25 09:00:36.982290306 +0000 UTC m=+2970.691321101" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.034300 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.177296 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9shhb\" (UniqueName: \"kubernetes.io/projected/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-kube-api-access-9shhb\") pod \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.177634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-etc-machine-id\") pod \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.177739 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ce41bb06-b62f-47e2-bbdc-a833b0180ab0" (UID: "ce41bb06-b62f-47e2-bbdc-a833b0180ab0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.177976 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-scripts\") pod \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.178140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data\") pod \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.178274 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-logs\") pod \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.178423 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data-custom\") pod \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.178531 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-combined-ca-bundle\") pod \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\" (UID: \"ce41bb06-b62f-47e2-bbdc-a833b0180ab0\") " Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.178578 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-logs" 
(OuterVolumeSpecName: "logs") pod "ce41bb06-b62f-47e2-bbdc-a833b0180ab0" (UID: "ce41bb06-b62f-47e2-bbdc-a833b0180ab0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.179379 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-logs\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.179464 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.189735 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ce41bb06-b62f-47e2-bbdc-a833b0180ab0" (UID: "ce41bb06-b62f-47e2-bbdc-a833b0180ab0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.190098 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-kube-api-access-9shhb" (OuterVolumeSpecName: "kube-api-access-9shhb") pod "ce41bb06-b62f-47e2-bbdc-a833b0180ab0" (UID: "ce41bb06-b62f-47e2-bbdc-a833b0180ab0"). InnerVolumeSpecName "kube-api-access-9shhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.192333 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-scripts" (OuterVolumeSpecName: "scripts") pod "ce41bb06-b62f-47e2-bbdc-a833b0180ab0" (UID: "ce41bb06-b62f-47e2-bbdc-a833b0180ab0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.222867 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce41bb06-b62f-47e2-bbdc-a833b0180ab0" (UID: "ce41bb06-b62f-47e2-bbdc-a833b0180ab0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.244436 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data" (OuterVolumeSpecName: "config-data") pod "ce41bb06-b62f-47e2-bbdc-a833b0180ab0" (UID: "ce41bb06-b62f-47e2-bbdc-a833b0180ab0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.281373 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9shhb\" (UniqueName: \"kubernetes.io/projected/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-kube-api-access-9shhb\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.281421 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.281434 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.281447 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 
09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.281458 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce41bb06-b62f-47e2-bbdc-a833b0180ab0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.331895 4760 generic.go:334] "Generic (PLEG): container finished" podID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerID="6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed" exitCode=0 Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.331933 4760 generic.go:334] "Generic (PLEG): container finished" podID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerID="f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d" exitCode=143 Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.331953 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.331993 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ce41bb06-b62f-47e2-bbdc-a833b0180ab0","Type":"ContainerDied","Data":"6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed"} Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.332106 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ce41bb06-b62f-47e2-bbdc-a833b0180ab0","Type":"ContainerDied","Data":"f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d"} Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.332122 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ce41bb06-b62f-47e2-bbdc-a833b0180ab0","Type":"ContainerDied","Data":"41993344b9662fe142c905c49aa19536262b5149debda5ee0b637f9642a5e6f7"} Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.332141 4760 scope.go:117] "RemoveContainer" 
containerID="6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.375858 4760 scope.go:117] "RemoveContainer" containerID="f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.385239 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.408311 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.425883 4760 scope.go:117] "RemoveContainer" containerID="6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed" Nov 25 09:00:37 crc kubenswrapper[4760]: E1125 09:00:37.426262 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed\": container with ID starting with 6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed not found: ID does not exist" containerID="6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.426296 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed"} err="failed to get container status \"6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed\": rpc error: code = NotFound desc = could not find container \"6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed\": container with ID starting with 6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed not found: ID does not exist" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.426320 4760 scope.go:117] "RemoveContainer" containerID="f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d" Nov 25 09:00:37 crc 
kubenswrapper[4760]: E1125 09:00:37.426593 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d\": container with ID starting with f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d not found: ID does not exist" containerID="f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.426616 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d"} err="failed to get container status \"f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d\": rpc error: code = NotFound desc = could not find container \"f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d\": container with ID starting with f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d not found: ID does not exist" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.426631 4760 scope.go:117] "RemoveContainer" containerID="6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.427338 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed"} err="failed to get container status \"6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed\": rpc error: code = NotFound desc = could not find container \"6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed\": container with ID starting with 6fe4d8546e070c38c02875e8b72ca2e7b94ecf57e37c7d97a7214a2558389eed not found: ID does not exist" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.427361 4760 scope.go:117] "RemoveContainer" containerID="f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d" Nov 25 
09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.427849 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d"} err="failed to get container status \"f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d\": rpc error: code = NotFound desc = could not find container \"f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d\": container with ID starting with f8dfbe8e1189499630a062567f257e41d2c971289b2e8074d24e637a8136bd2d not found: ID does not exist" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.435967 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 25 09:00:37 crc kubenswrapper[4760]: E1125 09:00:37.436459 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerName="manila-api" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.436473 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerName="manila-api" Nov 25 09:00:37 crc kubenswrapper[4760]: E1125 09:00:37.436518 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerName="manila-api-log" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.436525 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerName="manila-api-log" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.436732 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerName="manila-api-log" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.436763 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" containerName="manila-api" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.437942 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.441112 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.441578 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.444075 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.444580 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.588785 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-scripts\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.588829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc0b6e2-9204-474d-842c-c488ff0811a4-logs\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.588865 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.588884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-public-tls-certs\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.589008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.589034 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nmh2\" (UniqueName: \"kubernetes.io/projected/0cc0b6e2-9204-474d-842c-c488ff0811a4-kube-api-access-8nmh2\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.589053 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0cc0b6e2-9204-474d-842c-c488ff0811a4-etc-machine-id\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.589083 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-config-data-custom\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.589099 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-config-data\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.691125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-scripts\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.691619 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc0b6e2-9204-474d-842c-c488ff0811a4-logs\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.691674 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.691702 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-public-tls-certs\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.691841 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.691876 
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nmh2\" (UniqueName: \"kubernetes.io/projected/0cc0b6e2-9204-474d-842c-c488ff0811a4-kube-api-access-8nmh2\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.691903 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0cc0b6e2-9204-474d-842c-c488ff0811a4-etc-machine-id\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.691946 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-config-data-custom\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.691973 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-config-data\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.692055 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0cc0b6e2-9204-474d-842c-c488ff0811a4-etc-machine-id\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.692471 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc0b6e2-9204-474d-842c-c488ff0811a4-logs\") pod \"manila-api-0\" (UID: 
\"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.697395 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-scripts\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.709612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.710123 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-internal-tls-certs\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.710220 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-config-data-custom\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.725894 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nmh2\" (UniqueName: \"kubernetes.io/projected/0cc0b6e2-9204-474d-842c-c488ff0811a4-kube-api-access-8nmh2\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.726133 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-config-data\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.728185 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cc0b6e2-9204-474d-842c-c488ff0811a4-public-tls-certs\") pod \"manila-api-0\" (UID: \"0cc0b6e2-9204-474d-842c-c488ff0811a4\") " pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.826764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.841659 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.841935 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="ceilometer-central-agent" containerID="cri-o://e2a46bdc2fbac6741e12931c29eaad6684f7f65f0b1d98b01f3a5b613eb7368e" gracePeriod=30 Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.842014 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="proxy-httpd" containerID="cri-o://06ddfc3cea1a203e800d58620becb2c656291710f58ee64b3a0a4e4475e92b16" gracePeriod=30 Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.842039 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="ceilometer-notification-agent" containerID="cri-o://aedc630812a97871b6547f6ab3ab006899045c49d912efe4be0b59c71821e111" gracePeriod=30 Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.842039 4760 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="sg-core" containerID="cri-o://d73cfaacbd00e5adf1c6b21a83bbbe7620706389a272f56e6420a709b5f5636e" gracePeriod=30 Nov 25 09:00:37 crc kubenswrapper[4760]: I1125 09:00:37.953593 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.193:3000/\": read tcp 10.217.0.2:41428->10.217.0.193:3000: read: connection reset by peer" Nov 25 09:00:38 crc kubenswrapper[4760]: I1125 09:00:38.358084 4760 generic.go:334] "Generic (PLEG): container finished" podID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerID="06ddfc3cea1a203e800d58620becb2c656291710f58ee64b3a0a4e4475e92b16" exitCode=0 Nov 25 09:00:38 crc kubenswrapper[4760]: I1125 09:00:38.358918 4760 generic.go:334] "Generic (PLEG): container finished" podID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerID="d73cfaacbd00e5adf1c6b21a83bbbe7620706389a272f56e6420a709b5f5636e" exitCode=2 Nov 25 09:00:38 crc kubenswrapper[4760]: I1125 09:00:38.358159 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerDied","Data":"06ddfc3cea1a203e800d58620becb2c656291710f58ee64b3a0a4e4475e92b16"} Nov 25 09:00:38 crc kubenswrapper[4760]: I1125 09:00:38.359155 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerDied","Data":"d73cfaacbd00e5adf1c6b21a83bbbe7620706389a272f56e6420a709b5f5636e"} Nov 25 09:00:38 crc kubenswrapper[4760]: I1125 09:00:38.950611 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce41bb06-b62f-47e2-bbdc-a833b0180ab0" path="/var/lib/kubelet/pods/ce41bb06-b62f-47e2-bbdc-a833b0180ab0/volumes" Nov 25 09:00:39 crc 
kubenswrapper[4760]: I1125 09:00:39.372072 4760 generic.go:334] "Generic (PLEG): container finished" podID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerID="e2a46bdc2fbac6741e12931c29eaad6684f7f65f0b1d98b01f3a5b613eb7368e" exitCode=0 Nov 25 09:00:39 crc kubenswrapper[4760]: I1125 09:00:39.372113 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerDied","Data":"e2a46bdc2fbac6741e12931c29eaad6684f7f65f0b1d98b01f3a5b613eb7368e"} Nov 25 09:00:40 crc kubenswrapper[4760]: I1125 09:00:40.394281 4760 generic.go:334] "Generic (PLEG): container finished" podID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerID="aedc630812a97871b6547f6ab3ab006899045c49d912efe4be0b59c71821e111" exitCode=0 Nov 25 09:00:40 crc kubenswrapper[4760]: I1125 09:00:40.394374 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerDied","Data":"aedc630812a97871b6547f6ab3ab006899045c49d912efe4be0b59c71821e111"} Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.424517 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.570878 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-ceilometer-tls-certs\") pod \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.570925 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-config-data\") pod \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.570950 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-sg-core-conf-yaml\") pod \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.571045 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-run-httpd\") pod \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.571062 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-scripts\") pod \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.571121 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-log-httpd\") pod \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.571171 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-combined-ca-bundle\") pod \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.571200 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26wz8\" (UniqueName: \"kubernetes.io/projected/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-kube-api-access-26wz8\") pod \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\" (UID: \"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2\") " Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.572907 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" (UID: "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.573125 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" (UID: "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.621097 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-scripts" (OuterVolumeSpecName: "scripts") pod "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" (UID: "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.628542 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-kube-api-access-26wz8" (OuterVolumeSpecName: "kube-api-access-26wz8") pod "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" (UID: "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2"). InnerVolumeSpecName "kube-api-access-26wz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.684888 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.684922 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.684931 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.684940 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26wz8\" (UniqueName: \"kubernetes.io/projected/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-kube-api-access-26wz8\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:41 
crc kubenswrapper[4760]: I1125 09:00:41.767730 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" (UID: "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.786826 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.818165 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" (UID: "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.827460 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" (UID: "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.870355 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.889937 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.889959 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.917299 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-config-data" (OuterVolumeSpecName: "config-data") pod "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" (UID: "efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.963592 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 25 09:00:41 crc kubenswrapper[4760]: I1125 09:00:41.992371 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.390497 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6885d49d55-9mqqw" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.428984 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e3d6e790-be37-4105-8eed-61c98c6576b5","Type":"ContainerStarted","Data":"b7708dac53c629790af655f35b63c60641caf85046634ae7d554880c1863927f"} Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.445972 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0cc0b6e2-9204-474d-842c-c488ff0811a4","Type":"ContainerStarted","Data":"6b053e889bcd67d55138bc443a8d24152140817281ee4a8e80fddff428c7f4e4"} Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.469972 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dc44c56c-4dzcm"] Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.470315 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" podUID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" containerName="dnsmasq-dns" containerID="cri-o://9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd" gracePeriod=10 Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.478646 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2","Type":"ContainerDied","Data":"b0f0cffbecf5b5ceb5bdb36c09f2b787f968ea129a898dca2a667ab7e4a43db3"} Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.478706 4760 scope.go:117] "RemoveContainer" containerID="06ddfc3cea1a203e800d58620becb2c656291710f58ee64b3a0a4e4475e92b16" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.478917 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.701415 4760 scope.go:117] "RemoveContainer" containerID="d73cfaacbd00e5adf1c6b21a83bbbe7620706389a272f56e6420a709b5f5636e" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.723923 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.748127 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.760130 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:00:42 crc kubenswrapper[4760]: E1125 09:00:42.761468 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="proxy-httpd" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.761491 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="proxy-httpd" Nov 25 09:00:42 crc kubenswrapper[4760]: E1125 09:00:42.761527 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="ceilometer-notification-agent" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.761535 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="ceilometer-notification-agent" Nov 25 09:00:42 crc kubenswrapper[4760]: E1125 09:00:42.761557 
4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="sg-core" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.761564 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="sg-core" Nov 25 09:00:42 crc kubenswrapper[4760]: E1125 09:00:42.761581 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="ceilometer-central-agent" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.761588 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="ceilometer-central-agent" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.761802 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="ceilometer-notification-agent" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.761822 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="ceilometer-central-agent" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.761847 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="sg-core" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.761864 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" containerName="proxy-httpd" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.767740 4760 scope.go:117] "RemoveContainer" containerID="aedc630812a97871b6547f6ab3ab006899045c49d912efe4be0b59c71821e111" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.769486 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.773650 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.773814 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.773851 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.799060 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.873665 4760 scope.go:117] "RemoveContainer" containerID="e2a46bdc2fbac6741e12931c29eaad6684f7f65f0b1d98b01f3a5b613eb7368e" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.919069 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-scripts\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.919144 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-log-httpd\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.919402 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 
25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.919528 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.919586 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-config-data\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.919688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwnw7\" (UniqueName: \"kubernetes.io/projected/ee431a64-bc53-4858-b53d-e051099965a1-kube-api-access-mwnw7\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.919800 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-run-httpd\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.919845 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:42 crc kubenswrapper[4760]: I1125 09:00:42.996840 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2" path="/var/lib/kubelet/pods/efe8cbf4-8fba-4695-9a9a-63e2ffc0c3d2/volumes" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.028632 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.028750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.028800 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-config-data\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.028877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwnw7\" (UniqueName: \"kubernetes.io/projected/ee431a64-bc53-4858-b53d-e051099965a1-kube-api-access-mwnw7\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.028920 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-run-httpd\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.028946 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.028994 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-scripts\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.029055 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-log-httpd\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.029713 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-log-httpd\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.031858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-run-httpd\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.039262 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " 
pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.041338 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.042033 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.053974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-config-data\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.061085 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-scripts\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.080766 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwnw7\" (UniqueName: \"kubernetes.io/projected/ee431a64-bc53-4858-b53d-e051099965a1-kube-api-access-mwnw7\") pod \"ceilometer-0\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.105942 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.254840 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.364747 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-config\") pod \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.365981 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2b97\" (UniqueName: \"kubernetes.io/projected/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-kube-api-access-n2b97\") pod \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.366982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-dns-svc\") pod \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.367180 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-nb\") pod \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.367532 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-openstack-edpm-ipam\") pod \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\" (UID: 
\"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.367657 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-sb\") pod \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\" (UID: \"27e51547-0b08-4cb3-8a61-1ecfc452fbdb\") " Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.374389 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-kube-api-access-n2b97" (OuterVolumeSpecName: "kube-api-access-n2b97") pod "27e51547-0b08-4cb3-8a61-1ecfc452fbdb" (UID: "27e51547-0b08-4cb3-8a61-1ecfc452fbdb"). InnerVolumeSpecName "kube-api-access-n2b97". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.470961 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2b97\" (UniqueName: \"kubernetes.io/projected/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-kube-api-access-n2b97\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.484388 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "27e51547-0b08-4cb3-8a61-1ecfc452fbdb" (UID: "27e51547-0b08-4cb3-8a61-1ecfc452fbdb"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.485521 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27e51547-0b08-4cb3-8a61-1ecfc452fbdb" (UID: "27e51547-0b08-4cb3-8a61-1ecfc452fbdb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.491072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27e51547-0b08-4cb3-8a61-1ecfc452fbdb" (UID: "27e51547-0b08-4cb3-8a61-1ecfc452fbdb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.495407 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-config" (OuterVolumeSpecName: "config") pod "27e51547-0b08-4cb3-8a61-1ecfc452fbdb" (UID: "27e51547-0b08-4cb3-8a61-1ecfc452fbdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.500586 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27e51547-0b08-4cb3-8a61-1ecfc452fbdb" (UID: "27e51547-0b08-4cb3-8a61-1ecfc452fbdb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.521811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0cc0b6e2-9204-474d-842c-c488ff0811a4","Type":"ContainerStarted","Data":"4a9901d94f71fbc74b2434e214cbd8dd45a095502d5ff2219548159458425f9e"} Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.521872 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0cc0b6e2-9204-474d-842c-c488ff0811a4","Type":"ContainerStarted","Data":"bf19409909d68fe27e4498a515bae72f89f652d24ac6d405ed35a887a4c3082e"} Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.522509 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.534045 4760 generic.go:334] "Generic (PLEG): container finished" podID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" containerID="9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd" exitCode=0 Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.534208 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.536140 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" event={"ID":"27e51547-0b08-4cb3-8a61-1ecfc452fbdb","Type":"ContainerDied","Data":"9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd"} Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.536208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" event={"ID":"27e51547-0b08-4cb3-8a61-1ecfc452fbdb","Type":"ContainerDied","Data":"c02877b0d39e2472098f1ff5b4c12fe3a6b619e599f045a616ec8d6c56644aab"} Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.536240 4760 scope.go:117] "RemoveContainer" containerID="9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.545846 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.545825494 podStartE2EDuration="6.545825494s" podCreationTimestamp="2025-11-25 09:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:00:43.541597763 +0000 UTC m=+2977.250628568" watchObservedRunningTime="2025-11-25 09:00:43.545825494 +0000 UTC m=+2977.254856289" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.564765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e3d6e790-be37-4105-8eed-61c98c6576b5","Type":"ContainerStarted","Data":"333ad11054748d798ec477a1df68b23441a1780a482785d5465525ee13f079a8"} Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.574000 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.574274 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.574349 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-config\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.574431 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.574503 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27e51547-0b08-4cb3-8a61-1ecfc452fbdb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.595500 4760 scope.go:117] "RemoveContainer" containerID="cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.610710 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.045651115 podStartE2EDuration="12.610685874s" podCreationTimestamp="2025-11-25 09:00:31 +0000 UTC" firstStartedPulling="2025-11-25 09:00:32.554472751 +0000 UTC m=+2966.263503546" lastFinishedPulling="2025-11-25 09:00:41.11950752 +0000 UTC m=+2974.828538305" observedRunningTime="2025-11-25 09:00:43.595705487 +0000 UTC m=+2977.304736292" watchObservedRunningTime="2025-11-25 09:00:43.610685874 +0000 UTC m=+2977.319716669" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.639312 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6dc44c56c-4dzcm"] Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.644513 4760 scope.go:117] "RemoveContainer" containerID="9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd" Nov 25 09:00:43 crc kubenswrapper[4760]: E1125 09:00:43.646791 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd\": container with ID starting with 9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd not found: ID does not exist" containerID="9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.646947 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd"} err="failed to get container status \"9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd\": rpc error: code = NotFound desc = could not find container \"9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd\": container with ID starting with 9f1fba48ed9732ee45d23e198ef0237319129f5b6fd800655433299ffd08b9bd not found: ID does not exist" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.647062 4760 scope.go:117] "RemoveContainer" containerID="cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56" Nov 25 09:00:43 crc kubenswrapper[4760]: E1125 09:00:43.647681 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56\": container with ID starting with cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56 not found: ID does not exist" containerID="cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.647736 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56"} err="failed to get container status \"cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56\": rpc error: code = NotFound desc = could not find container \"cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56\": container with ID starting with cf38af97447a60baaff8e3bcde51588e9f245837d41db300b415c779c232fb56 not found: ID does not exist" Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.662000 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dc44c56c-4dzcm"] Nov 25 09:00:43 crc kubenswrapper[4760]: W1125 09:00:43.668945 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee431a64_bc53_4858_b53d_e051099965a1.slice/crio-624c2d912148fa0960c0cc0b6fa65cf09605db084b30fdc0f994448711c348fe WatchSource:0}: Error finding container 624c2d912148fa0960c0cc0b6fa65cf09605db084b30fdc0f994448711c348fe: Status 404 returned error can't find the container with id 624c2d912148fa0960c0cc0b6fa65cf09605db084b30fdc0f994448711c348fe Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.672984 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:00:43 crc kubenswrapper[4760]: I1125 09:00:43.676504 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:00:44 crc kubenswrapper[4760]: I1125 09:00:44.573644 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerStarted","Data":"624c2d912148fa0960c0cc0b6fa65cf09605db084b30fdc0f994448711c348fe"} Nov 25 09:00:44 crc kubenswrapper[4760]: I1125 09:00:44.949306 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" path="/var/lib/kubelet/pods/27e51547-0b08-4cb3-8a61-1ecfc452fbdb/volumes" Nov 25 09:00:45 crc kubenswrapper[4760]: I1125 09:00:45.588692 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerStarted","Data":"0eb7a25e447cd433bcbf6584aa988810c0bdab57265e24d95c556d71b283f8c7"} Nov 25 09:00:45 crc kubenswrapper[4760]: I1125 09:00:45.839330 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:00:46 crc kubenswrapper[4760]: I1125 09:00:46.599649 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerStarted","Data":"fc7cf9bbb30cc77e4abfb6e3271c552fda99889b73c5841e2ecb3abd37c0d623"} Nov 25 09:00:47 crc kubenswrapper[4760]: I1125 09:00:47.835063 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6dc44c56c-4dzcm" podUID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: i/o timeout" Nov 25 09:00:48 crc kubenswrapper[4760]: I1125 09:00:48.626068 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerStarted","Data":"67f33e898d65ddb339fddb37a852acd85e83d6fde99e2b13ac60b1d5d5440f89"} Nov 25 09:00:49 crc kubenswrapper[4760]: I1125 09:00:49.638068 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerStarted","Data":"400ae8dd5257f0d13d17d074ec0f3511ca549403634c0cfca5969058bcb578d1"} Nov 25 09:00:49 crc kubenswrapper[4760]: I1125 09:00:49.638518 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 09:00:49 crc kubenswrapper[4760]: I1125 09:00:49.638356 
4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="ceilometer-notification-agent" containerID="cri-o://fc7cf9bbb30cc77e4abfb6e3271c552fda99889b73c5841e2ecb3abd37c0d623" gracePeriod=30 Nov 25 09:00:49 crc kubenswrapper[4760]: I1125 09:00:49.638315 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="ceilometer-central-agent" containerID="cri-o://0eb7a25e447cd433bcbf6584aa988810c0bdab57265e24d95c556d71b283f8c7" gracePeriod=30 Nov 25 09:00:49 crc kubenswrapper[4760]: I1125 09:00:49.638382 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="sg-core" containerID="cri-o://67f33e898d65ddb339fddb37a852acd85e83d6fde99e2b13ac60b1d5d5440f89" gracePeriod=30 Nov 25 09:00:49 crc kubenswrapper[4760]: I1125 09:00:49.638413 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="proxy-httpd" containerID="cri-o://400ae8dd5257f0d13d17d074ec0f3511ca549403634c0cfca5969058bcb578d1" gracePeriod=30 Nov 25 09:00:50 crc kubenswrapper[4760]: I1125 09:00:50.660361 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee431a64-bc53-4858-b53d-e051099965a1" containerID="67f33e898d65ddb339fddb37a852acd85e83d6fde99e2b13ac60b1d5d5440f89" exitCode=2 Nov 25 09:00:50 crc kubenswrapper[4760]: I1125 09:00:50.660675 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee431a64-bc53-4858-b53d-e051099965a1" containerID="fc7cf9bbb30cc77e4abfb6e3271c552fda99889b73c5841e2ecb3abd37c0d623" exitCode=0 Nov 25 09:00:50 crc kubenswrapper[4760]: I1125 09:00:50.660702 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerDied","Data":"67f33e898d65ddb339fddb37a852acd85e83d6fde99e2b13ac60b1d5d5440f89"} Nov 25 09:00:50 crc kubenswrapper[4760]: I1125 09:00:50.660745 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerDied","Data":"fc7cf9bbb30cc77e4abfb6e3271c552fda99889b73c5841e2ecb3abd37c0d623"} Nov 25 09:00:51 crc kubenswrapper[4760]: I1125 09:00:51.682159 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee431a64-bc53-4858-b53d-e051099965a1" containerID="0eb7a25e447cd433bcbf6584aa988810c0bdab57265e24d95c556d71b283f8c7" exitCode=0 Nov 25 09:00:51 crc kubenswrapper[4760]: I1125 09:00:51.682223 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerDied","Data":"0eb7a25e447cd433bcbf6584aa988810c0bdab57265e24d95c556d71b283f8c7"} Nov 25 09:00:51 crc kubenswrapper[4760]: I1125 09:00:51.827022 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 09:00:53 crc kubenswrapper[4760]: I1125 09:00:53.371115 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 25 09:00:53 crc kubenswrapper[4760]: I1125 09:00:53.400143 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.770894805 podStartE2EDuration="11.400127126s" podCreationTimestamp="2025-11-25 09:00:42 +0000 UTC" firstStartedPulling="2025-11-25 09:00:43.672614211 +0000 UTC m=+2977.381645006" lastFinishedPulling="2025-11-25 09:00:49.301846532 +0000 UTC m=+2983.010877327" observedRunningTime="2025-11-25 09:00:49.678692704 +0000 UTC m=+2983.387723519" watchObservedRunningTime="2025-11-25 09:00:53.400127126 +0000 UTC m=+2987.109157921" Nov 25 09:00:53 crc 
kubenswrapper[4760]: I1125 09:00:53.451598 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 09:00:53 crc kubenswrapper[4760]: I1125 09:00:53.509329 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 25 09:00:53 crc kubenswrapper[4760]: I1125 09:00:53.592016 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 09:00:53 crc kubenswrapper[4760]: I1125 09:00:53.706796 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerName="manila-scheduler" containerID="cri-o://caa937628c52b4850f908062ac3421fa3ad6794c0a63a97ca6e09a2ae9f6714c" gracePeriod=30 Nov 25 09:00:53 crc kubenswrapper[4760]: I1125 09:00:53.707308 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerName="probe" containerID="cri-o://7d8164be378c049a6782a0ef56f3adb0db368bc95c650e350bbc1724515de79f" gracePeriod=30 Nov 25 09:00:53 crc kubenswrapper[4760]: I1125 09:00:53.707433 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerName="manila-share" containerID="cri-o://b7708dac53c629790af655f35b63c60641caf85046634ae7d554880c1863927f" gracePeriod=30 Nov 25 09:00:53 crc kubenswrapper[4760]: I1125 09:00:53.707619 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerName="probe" containerID="cri-o://333ad11054748d798ec477a1df68b23441a1780a482785d5465525ee13f079a8" gracePeriod=30 Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.716147 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerID="333ad11054748d798ec477a1df68b23441a1780a482785d5465525ee13f079a8" exitCode=0 Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.716907 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerID="b7708dac53c629790af655f35b63c60641caf85046634ae7d554880c1863927f" exitCode=1 Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.716226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e3d6e790-be37-4105-8eed-61c98c6576b5","Type":"ContainerDied","Data":"333ad11054748d798ec477a1df68b23441a1780a482785d5465525ee13f079a8"} Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.716971 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e3d6e790-be37-4105-8eed-61c98c6576b5","Type":"ContainerDied","Data":"b7708dac53c629790af655f35b63c60641caf85046634ae7d554880c1863927f"} Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.716983 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"e3d6e790-be37-4105-8eed-61c98c6576b5","Type":"ContainerDied","Data":"b7aa7933c0caad67a5ef3906a55e41ff77d06161382efc86f919c948119fa407"} Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.716994 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7aa7933c0caad67a5ef3906a55e41ff77d06161382efc86f919c948119fa407" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.719061 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerID="7d8164be378c049a6782a0ef56f3adb0db368bc95c650e350bbc1724515de79f" exitCode=0 Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.719105 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"f6791bbe-2777-4891-a7dd-7622d9af1bc9","Type":"ContainerDied","Data":"7d8164be378c049a6782a0ef56f3adb0db368bc95c650e350bbc1724515de79f"} Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.759654 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.794948 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data-custom\") pod \"e3d6e790-be37-4105-8eed-61c98c6576b5\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.795001 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-var-lib-manila\") pod \"e3d6e790-be37-4105-8eed-61c98c6576b5\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.795090 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data\") pod \"e3d6e790-be37-4105-8eed-61c98c6576b5\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.795179 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "e3d6e790-be37-4105-8eed-61c98c6576b5" (UID: "e3d6e790-be37-4105-8eed-61c98c6576b5"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.795189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7s9r\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-kube-api-access-j7s9r\") pod \"e3d6e790-be37-4105-8eed-61c98c6576b5\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.795301 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-ceph\") pod \"e3d6e790-be37-4105-8eed-61c98c6576b5\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.795370 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-combined-ca-bundle\") pod \"e3d6e790-be37-4105-8eed-61c98c6576b5\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.795422 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-scripts\") pod \"e3d6e790-be37-4105-8eed-61c98c6576b5\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.795451 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-etc-machine-id\") pod \"e3d6e790-be37-4105-8eed-61c98c6576b5\" (UID: \"e3d6e790-be37-4105-8eed-61c98c6576b5\") " Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.796067 4760 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-var-lib-manila\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.796112 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e3d6e790-be37-4105-8eed-61c98c6576b5" (UID: "e3d6e790-be37-4105-8eed-61c98c6576b5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.801752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e3d6e790-be37-4105-8eed-61c98c6576b5" (UID: "e3d6e790-be37-4105-8eed-61c98c6576b5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.801772 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-kube-api-access-j7s9r" (OuterVolumeSpecName: "kube-api-access-j7s9r") pod "e3d6e790-be37-4105-8eed-61c98c6576b5" (UID: "e3d6e790-be37-4105-8eed-61c98c6576b5"). InnerVolumeSpecName "kube-api-access-j7s9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.802904 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-scripts" (OuterVolumeSpecName: "scripts") pod "e3d6e790-be37-4105-8eed-61c98c6576b5" (UID: "e3d6e790-be37-4105-8eed-61c98c6576b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.803295 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-ceph" (OuterVolumeSpecName: "ceph") pod "e3d6e790-be37-4105-8eed-61c98c6576b5" (UID: "e3d6e790-be37-4105-8eed-61c98c6576b5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.861088 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3d6e790-be37-4105-8eed-61c98c6576b5" (UID: "e3d6e790-be37-4105-8eed-61c98c6576b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.898550 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7s9r\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-kube-api-access-j7s9r\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.898580 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3d6e790-be37-4105-8eed-61c98c6576b5-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.898589 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.898598 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:54 crc 
kubenswrapper[4760]: I1125 09:00:54.898606 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3d6e790-be37-4105-8eed-61c98c6576b5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.898614 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:54 crc kubenswrapper[4760]: I1125 09:00:54.907571 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data" (OuterVolumeSpecName: "config-data") pod "e3d6e790-be37-4105-8eed-61c98c6576b5" (UID: "e3d6e790-be37-4105-8eed-61c98c6576b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.000966 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d6e790-be37-4105-8eed-61c98c6576b5-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.728699 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.754803 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.764535 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.782952 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 09:00:55 crc kubenswrapper[4760]: E1125 09:00:55.783662 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerName="probe" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.783694 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerName="probe" Nov 25 09:00:55 crc kubenswrapper[4760]: E1125 09:00:55.783761 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerName="manila-share" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.783776 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerName="manila-share" Nov 25 09:00:55 crc kubenswrapper[4760]: E1125 09:00:55.783800 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" containerName="dnsmasq-dns" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.783814 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" containerName="dnsmasq-dns" Nov 25 09:00:55 crc kubenswrapper[4760]: E1125 09:00:55.783866 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" containerName="init" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.783880 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" containerName="init" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.784237 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerName="manila-share" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.784302 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d6e790-be37-4105-8eed-61c98c6576b5" containerName="probe" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.784341 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e51547-0b08-4cb3-8a61-1ecfc452fbdb" containerName="dnsmasq-dns" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.786502 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.788479 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.793662 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.815730 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-scripts\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.815767 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-ceph\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.815807 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.815861 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.815930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.815966 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.816047 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkb4\" (UniqueName: \"kubernetes.io/projected/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-kube-api-access-4rkb4\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.816078 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-config-data\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.918202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-scripts\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.918270 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-ceph\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.918315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.918362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.918421 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.918450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.918489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rkb4\" (UniqueName: \"kubernetes.io/projected/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-kube-api-access-4rkb4\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.918508 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-config-data\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.919296 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.919329 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-etc-machine-id\") pod \"manila-share-share1-0\" (UID: 
\"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.924283 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.925111 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-config-data\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.925536 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.925741 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-scripts\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.926167 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-ceph\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:55 crc kubenswrapper[4760]: I1125 09:00:55.937849 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4rkb4\" (UniqueName: \"kubernetes.io/projected/4424df0c-a7e7-4880-aeb3-e8beaaa57b80-kube-api-access-4rkb4\") pod \"manila-share-share1-0\" (UID: \"4424df0c-a7e7-4880-aeb3-e8beaaa57b80\") " pod="openstack/manila-share-share1-0" Nov 25 09:00:56 crc kubenswrapper[4760]: I1125 09:00:56.127876 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 09:00:56 crc kubenswrapper[4760]: I1125 09:00:56.673163 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 09:00:56 crc kubenswrapper[4760]: W1125 09:00:56.681950 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4424df0c_a7e7_4880_aeb3_e8beaaa57b80.slice/crio-69ab6341552e4fbc1b57de3fc0f5219212faa4477547346a3b84309d2d4b7fcd WatchSource:0}: Error finding container 69ab6341552e4fbc1b57de3fc0f5219212faa4477547346a3b84309d2d4b7fcd: Status 404 returned error can't find the container with id 69ab6341552e4fbc1b57de3fc0f5219212faa4477547346a3b84309d2d4b7fcd Nov 25 09:00:56 crc kubenswrapper[4760]: I1125 09:00:56.738168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"4424df0c-a7e7-4880-aeb3-e8beaaa57b80","Type":"ContainerStarted","Data":"69ab6341552e4fbc1b57de3fc0f5219212faa4477547346a3b84309d2d4b7fcd"} Nov 25 09:00:56 crc kubenswrapper[4760]: I1125 09:00:56.953442 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d6e790-be37-4105-8eed-61c98c6576b5" path="/var/lib/kubelet/pods/e3d6e790-be37-4105-8eed-61c98c6576b5/volumes" Nov 25 09:00:57 crc kubenswrapper[4760]: I1125 09:00:57.751125 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"4424df0c-a7e7-4880-aeb3-e8beaaa57b80","Type":"ContainerStarted","Data":"4a08819927c3c0e49dbaff01d910696503c1633a3ab713f8360508b09315ed30"} Nov 25 09:00:58 crc kubenswrapper[4760]: I1125 09:00:58.768702 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerID="caa937628c52b4850f908062ac3421fa3ad6794c0a63a97ca6e09a2ae9f6714c" exitCode=0 Nov 25 09:00:58 crc kubenswrapper[4760]: I1125 09:00:58.768931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f6791bbe-2777-4891-a7dd-7622d9af1bc9","Type":"ContainerDied","Data":"caa937628c52b4850f908062ac3421fa3ad6794c0a63a97ca6e09a2ae9f6714c"} Nov 25 09:00:58 crc kubenswrapper[4760]: I1125 09:00:58.773048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"4424df0c-a7e7-4880-aeb3-e8beaaa57b80","Type":"ContainerStarted","Data":"603ad63251ecadae981cb741608cebf1dc35062caeb115a1764169e164932142"} Nov 25 09:00:58 crc kubenswrapper[4760]: I1125 09:00:58.806658 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.806626486 podStartE2EDuration="3.806626486s" podCreationTimestamp="2025-11-25 09:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:00:58.804843255 +0000 UTC m=+2992.513874060" watchObservedRunningTime="2025-11-25 09:00:58.806626486 +0000 UTC m=+2992.515657281" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.164731 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.283819 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swhp7\" (UniqueName: \"kubernetes.io/projected/f6791bbe-2777-4891-a7dd-7622d9af1bc9-kube-api-access-swhp7\") pod \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.283885 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6791bbe-2777-4891-a7dd-7622d9af1bc9-etc-machine-id\") pod \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.284013 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-combined-ca-bundle\") pod \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.284085 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data\") pod \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.284127 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-scripts\") pod \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.284153 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data-custom\") pod \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\" (UID: \"f6791bbe-2777-4891-a7dd-7622d9af1bc9\") " Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.284138 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6791bbe-2777-4891-a7dd-7622d9af1bc9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f6791bbe-2777-4891-a7dd-7622d9af1bc9" (UID: "f6791bbe-2777-4891-a7dd-7622d9af1bc9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.284579 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6791bbe-2777-4891-a7dd-7622d9af1bc9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.290517 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f6791bbe-2777-4891-a7dd-7622d9af1bc9" (UID: "f6791bbe-2777-4891-a7dd-7622d9af1bc9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.291130 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-scripts" (OuterVolumeSpecName: "scripts") pod "f6791bbe-2777-4891-a7dd-7622d9af1bc9" (UID: "f6791bbe-2777-4891-a7dd-7622d9af1bc9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.291233 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.294978 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6791bbe-2777-4891-a7dd-7622d9af1bc9-kube-api-access-swhp7" (OuterVolumeSpecName: "kube-api-access-swhp7") pod "f6791bbe-2777-4891-a7dd-7622d9af1bc9" (UID: "f6791bbe-2777-4891-a7dd-7622d9af1bc9"). InnerVolumeSpecName "kube-api-access-swhp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.397289 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.397337 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.397358 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swhp7\" (UniqueName: \"kubernetes.io/projected/f6791bbe-2777-4891-a7dd-7622d9af1bc9-kube-api-access-swhp7\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.400560 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6791bbe-2777-4891-a7dd-7622d9af1bc9" (UID: "f6791bbe-2777-4891-a7dd-7622d9af1bc9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.478879 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data" (OuterVolumeSpecName: "config-data") pod "f6791bbe-2777-4891-a7dd-7622d9af1bc9" (UID: "f6791bbe-2777-4891-a7dd-7622d9af1bc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.499297 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.499341 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6791bbe-2777-4891-a7dd-7622d9af1bc9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.783769 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.783767 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f6791bbe-2777-4891-a7dd-7622d9af1bc9","Type":"ContainerDied","Data":"47753af1d3512d1fdd740253a24749a88e2446a07aebb523c374696acf8c87d5"} Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.783915 4760 scope.go:117] "RemoveContainer" containerID="7d8164be378c049a6782a0ef56f3adb0db368bc95c650e350bbc1724515de79f" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.811992 4760 scope.go:117] "RemoveContainer" containerID="caa937628c52b4850f908062ac3421fa3ad6794c0a63a97ca6e09a2ae9f6714c" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.820560 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.833208 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.845812 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 09:00:59 crc kubenswrapper[4760]: E1125 09:00:59.846267 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerName="manila-scheduler" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.846285 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerName="manila-scheduler" Nov 25 09:00:59 crc kubenswrapper[4760]: E1125 09:00:59.846311 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerName="probe" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.846317 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerName="probe" Nov 25 09:00:59 crc kubenswrapper[4760]: 
I1125 09:00:59.846533 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerName="manila-scheduler" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.846553 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" containerName="probe" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.847606 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.849960 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.857320 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.906739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkmbb\" (UniqueName: \"kubernetes.io/projected/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-kube-api-access-nkmbb\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.907144 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-scripts\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.907176 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " 
pod="openstack/manila-scheduler-0" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.907241 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.907282 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-config-data\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:00:59 crc kubenswrapper[4760]: I1125 09:00:59.907308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.009278 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-scripts\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.009340 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.009397 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.009416 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-config-data\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.009447 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.009532 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkmbb\" (UniqueName: \"kubernetes.io/projected/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-kube-api-access-nkmbb\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.009926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.013678 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.015234 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-scripts\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.015945 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-config-data\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.019710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.032764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkmbb\" (UniqueName: \"kubernetes.io/projected/f5b0fe2e-7460-4e1d-85f9-5cccfba89817-kube-api-access-nkmbb\") pod \"manila-scheduler-0\" (UID: \"f5b0fe2e-7460-4e1d-85f9-5cccfba89817\") " pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.145315 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401021-hxq6l"] Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.146644 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.158773 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401021-hxq6l"] Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.179933 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.213182 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsr9c\" (UniqueName: \"kubernetes.io/projected/54e54192-6eff-4b00-a1f6-f9290cb87eca-kube-api-access-xsr9c\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.213321 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-combined-ca-bundle\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.213410 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-config-data\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.213553 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-fernet-keys\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " 
pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.315562 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-fernet-keys\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.315982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsr9c\" (UniqueName: \"kubernetes.io/projected/54e54192-6eff-4b00-a1f6-f9290cb87eca-kube-api-access-xsr9c\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.316027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-combined-ca-bundle\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.316117 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-config-data\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.323578 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-config-data\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc 
kubenswrapper[4760]: I1125 09:01:00.323609 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-combined-ca-bundle\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.324416 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-fernet-keys\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.338499 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsr9c\" (UniqueName: \"kubernetes.io/projected/54e54192-6eff-4b00-a1f6-f9290cb87eca-kube-api-access-xsr9c\") pod \"keystone-cron-29401021-hxq6l\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.468866 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.628410 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 09:01:00 crc kubenswrapper[4760]: W1125 09:01:00.635566 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5b0fe2e_7460_4e1d_85f9_5cccfba89817.slice/crio-23d36d40123afc29959dcd63bdbaf92d3c5e5dc194528c1f0c9f05aebe437011 WatchSource:0}: Error finding container 23d36d40123afc29959dcd63bdbaf92d3c5e5dc194528c1f0c9f05aebe437011: Status 404 returned error can't find the container with id 23d36d40123afc29959dcd63bdbaf92d3c5e5dc194528c1f0c9f05aebe437011 Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.799352 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f5b0fe2e-7460-4e1d-85f9-5cccfba89817","Type":"ContainerStarted","Data":"23d36d40123afc29959dcd63bdbaf92d3c5e5dc194528c1f0c9f05aebe437011"} Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.906667 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401021-hxq6l"] Nov 25 09:01:00 crc kubenswrapper[4760]: W1125 09:01:00.909026 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54e54192_6eff_4b00_a1f6_f9290cb87eca.slice/crio-7b6ff6a4d4bedf7378be60cdceef2ab99615b9e3aef8b7b517067392113f7a8b WatchSource:0}: Error finding container 7b6ff6a4d4bedf7378be60cdceef2ab99615b9e3aef8b7b517067392113f7a8b: Status 404 returned error can't find the container with id 7b6ff6a4d4bedf7378be60cdceef2ab99615b9e3aef8b7b517067392113f7a8b Nov 25 09:01:00 crc kubenswrapper[4760]: I1125 09:01:00.955066 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6791bbe-2777-4891-a7dd-7622d9af1bc9" 
path="/var/lib/kubelet/pods/f6791bbe-2777-4891-a7dd-7622d9af1bc9/volumes" Nov 25 09:01:01 crc kubenswrapper[4760]: I1125 09:01:01.811701 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-hxq6l" event={"ID":"54e54192-6eff-4b00-a1f6-f9290cb87eca","Type":"ContainerStarted","Data":"304d7a2d9c591fc9838a2220ffba28d3e9201a10d8893fcd94b529b7ac24a3ea"} Nov 25 09:01:01 crc kubenswrapper[4760]: I1125 09:01:01.812051 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-hxq6l" event={"ID":"54e54192-6eff-4b00-a1f6-f9290cb87eca","Type":"ContainerStarted","Data":"7b6ff6a4d4bedf7378be60cdceef2ab99615b9e3aef8b7b517067392113f7a8b"} Nov 25 09:01:01 crc kubenswrapper[4760]: I1125 09:01:01.816332 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f5b0fe2e-7460-4e1d-85f9-5cccfba89817","Type":"ContainerStarted","Data":"093c977e72b476fb26bafc77895417d2bc9475d9f45103819e48619d732110dc"} Nov 25 09:01:02 crc kubenswrapper[4760]: I1125 09:01:02.828040 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"f5b0fe2e-7460-4e1d-85f9-5cccfba89817","Type":"ContainerStarted","Data":"c6d7f3ac367bc03d7d4626e8dc8da98a2df09b3c7769cf39251a22e0556f5078"} Nov 25 09:01:02 crc kubenswrapper[4760]: I1125 09:01:02.849477 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.849459427 podStartE2EDuration="3.849459427s" podCreationTimestamp="2025-11-25 09:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:01:02.849034285 +0000 UTC m=+2996.558065080" watchObservedRunningTime="2025-11-25 09:01:02.849459427 +0000 UTC m=+2996.558490222" Nov 25 09:01:02 crc kubenswrapper[4760]: I1125 09:01:02.854766 4760 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/keystone-cron-29401021-hxq6l" podStartSLOduration=2.854751188 podStartE2EDuration="2.854751188s" podCreationTimestamp="2025-11-25 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:01:01.827907813 +0000 UTC m=+2995.536938608" watchObservedRunningTime="2025-11-25 09:01:02.854751188 +0000 UTC m=+2996.563781983" Nov 25 09:01:03 crc kubenswrapper[4760]: I1125 09:01:03.836228 4760 generic.go:334] "Generic (PLEG): container finished" podID="54e54192-6eff-4b00-a1f6-f9290cb87eca" containerID="304d7a2d9c591fc9838a2220ffba28d3e9201a10d8893fcd94b529b7ac24a3ea" exitCode=0 Nov 25 09:01:03 crc kubenswrapper[4760]: I1125 09:01:03.836309 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-hxq6l" event={"ID":"54e54192-6eff-4b00-a1f6-f9290cb87eca","Type":"ContainerDied","Data":"304d7a2d9c591fc9838a2220ffba28d3e9201a10d8893fcd94b529b7ac24a3ea"} Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.180922 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.326774 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-combined-ca-bundle\") pod \"54e54192-6eff-4b00-a1f6-f9290cb87eca\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.326960 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-config-data\") pod \"54e54192-6eff-4b00-a1f6-f9290cb87eca\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.327040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-fernet-keys\") pod \"54e54192-6eff-4b00-a1f6-f9290cb87eca\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.327074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsr9c\" (UniqueName: \"kubernetes.io/projected/54e54192-6eff-4b00-a1f6-f9290cb87eca-kube-api-access-xsr9c\") pod \"54e54192-6eff-4b00-a1f6-f9290cb87eca\" (UID: \"54e54192-6eff-4b00-a1f6-f9290cb87eca\") " Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.332559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e54192-6eff-4b00-a1f6-f9290cb87eca-kube-api-access-xsr9c" (OuterVolumeSpecName: "kube-api-access-xsr9c") pod "54e54192-6eff-4b00-a1f6-f9290cb87eca" (UID: "54e54192-6eff-4b00-a1f6-f9290cb87eca"). InnerVolumeSpecName "kube-api-access-xsr9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.334035 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "54e54192-6eff-4b00-a1f6-f9290cb87eca" (UID: "54e54192-6eff-4b00-a1f6-f9290cb87eca"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.359479 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54e54192-6eff-4b00-a1f6-f9290cb87eca" (UID: "54e54192-6eff-4b00-a1f6-f9290cb87eca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.382679 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-config-data" (OuterVolumeSpecName: "config-data") pod "54e54192-6eff-4b00-a1f6-f9290cb87eca" (UID: "54e54192-6eff-4b00-a1f6-f9290cb87eca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.430388 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.430436 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.430453 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsr9c\" (UniqueName: \"kubernetes.io/projected/54e54192-6eff-4b00-a1f6-f9290cb87eca-kube-api-access-xsr9c\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.430467 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e54192-6eff-4b00-a1f6-f9290cb87eca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.855408 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-hxq6l" event={"ID":"54e54192-6eff-4b00-a1f6-f9290cb87eca","Type":"ContainerDied","Data":"7b6ff6a4d4bedf7378be60cdceef2ab99615b9e3aef8b7b517067392113f7a8b"} Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.855737 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b6ff6a4d4bedf7378be60cdceef2ab99615b9e3aef8b7b517067392113f7a8b" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:05.855473 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401021-hxq6l" Nov 25 09:01:08 crc kubenswrapper[4760]: I1125 09:01:06.128368 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 09:01:10 crc kubenswrapper[4760]: I1125 09:01:10.180493 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 25 09:01:13 crc kubenswrapper[4760]: I1125 09:01:13.114533 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 09:01:17 crc kubenswrapper[4760]: I1125 09:01:17.756709 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 25 09:01:19 crc kubenswrapper[4760]: I1125 09:01:19.992329 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee431a64-bc53-4858-b53d-e051099965a1" containerID="400ae8dd5257f0d13d17d074ec0f3511ca549403634c0cfca5969058bcb578d1" exitCode=137 Nov 25 09:01:19 crc kubenswrapper[4760]: I1125 09:01:19.992431 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerDied","Data":"400ae8dd5257f0d13d17d074ec0f3511ca549403634c0cfca5969058bcb578d1"} Nov 25 09:01:19 crc kubenswrapper[4760]: I1125 09:01:19.992825 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee431a64-bc53-4858-b53d-e051099965a1","Type":"ContainerDied","Data":"624c2d912148fa0960c0cc0b6fa65cf09605db084b30fdc0f994448711c348fe"} Nov 25 09:01:19 crc kubenswrapper[4760]: I1125 09:01:19.992842 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="624c2d912148fa0960c0cc0b6fa65cf09605db084b30fdc0f994448711c348fe" Nov 25 09:01:20 crc kubenswrapper[4760]: 
I1125 09:01:20.040041 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.162715 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-scripts\") pod \"ee431a64-bc53-4858-b53d-e051099965a1\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.162814 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-config-data\") pod \"ee431a64-bc53-4858-b53d-e051099965a1\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.162898 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-log-httpd\") pod \"ee431a64-bc53-4858-b53d-e051099965a1\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.162925 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-combined-ca-bundle\") pod \"ee431a64-bc53-4858-b53d-e051099965a1\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.162995 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-sg-core-conf-yaml\") pod \"ee431a64-bc53-4858-b53d-e051099965a1\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.163026 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mwnw7\" (UniqueName: \"kubernetes.io/projected/ee431a64-bc53-4858-b53d-e051099965a1-kube-api-access-mwnw7\") pod \"ee431a64-bc53-4858-b53d-e051099965a1\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.163052 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-ceilometer-tls-certs\") pod \"ee431a64-bc53-4858-b53d-e051099965a1\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.163139 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-run-httpd\") pod \"ee431a64-bc53-4858-b53d-e051099965a1\" (UID: \"ee431a64-bc53-4858-b53d-e051099965a1\") " Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.163497 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ee431a64-bc53-4858-b53d-e051099965a1" (UID: "ee431a64-bc53-4858-b53d-e051099965a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.163731 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ee431a64-bc53-4858-b53d-e051099965a1" (UID: "ee431a64-bc53-4858-b53d-e051099965a1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.163900 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.163927 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee431a64-bc53-4858-b53d-e051099965a1-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.169054 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee431a64-bc53-4858-b53d-e051099965a1-kube-api-access-mwnw7" (OuterVolumeSpecName: "kube-api-access-mwnw7") pod "ee431a64-bc53-4858-b53d-e051099965a1" (UID: "ee431a64-bc53-4858-b53d-e051099965a1"). InnerVolumeSpecName "kube-api-access-mwnw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.169109 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-scripts" (OuterVolumeSpecName: "scripts") pod "ee431a64-bc53-4858-b53d-e051099965a1" (UID: "ee431a64-bc53-4858-b53d-e051099965a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.197263 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ee431a64-bc53-4858-b53d-e051099965a1" (UID: "ee431a64-bc53-4858-b53d-e051099965a1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.217643 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ee431a64-bc53-4858-b53d-e051099965a1" (UID: "ee431a64-bc53-4858-b53d-e051099965a1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.240106 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee431a64-bc53-4858-b53d-e051099965a1" (UID: "ee431a64-bc53-4858-b53d-e051099965a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.265485 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.265574 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.265595 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.265607 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwnw7\" (UniqueName: \"kubernetes.io/projected/ee431a64-bc53-4858-b53d-e051099965a1-kube-api-access-mwnw7\") 
on node \"crc\" DevicePath \"\"" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.265617 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.275621 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-config-data" (OuterVolumeSpecName: "config-data") pod "ee431a64-bc53-4858-b53d-e051099965a1" (UID: "ee431a64-bc53-4858-b53d-e051099965a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:20 crc kubenswrapper[4760]: I1125 09:01:20.368378 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee431a64-bc53-4858-b53d-e051099965a1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.000615 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.030237 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.038524 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.059288 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:01:21 crc kubenswrapper[4760]: E1125 09:01:21.059966 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="ceilometer-notification-agent" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.059998 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="ceilometer-notification-agent" Nov 25 09:01:21 crc kubenswrapper[4760]: E1125 09:01:21.060024 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="ceilometer-central-agent" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.060037 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="ceilometer-central-agent" Nov 25 09:01:21 crc kubenswrapper[4760]: E1125 09:01:21.060088 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e54192-6eff-4b00-a1f6-f9290cb87eca" containerName="keystone-cron" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.060100 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e54192-6eff-4b00-a1f6-f9290cb87eca" containerName="keystone-cron" Nov 25 09:01:21 crc kubenswrapper[4760]: E1125 09:01:21.060119 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="sg-core" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.060131 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="sg-core" Nov 25 09:01:21 crc kubenswrapper[4760]: E1125 09:01:21.060146 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="proxy-httpd" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.060158 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="proxy-httpd" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.060483 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="sg-core" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.060519 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="ceilometer-notification-agent" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.060554 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="ceilometer-central-agent" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.060567 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e54192-6eff-4b00-a1f6-f9290cb87eca" containerName="keystone-cron" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.060589 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee431a64-bc53-4858-b53d-e051099965a1" containerName="proxy-httpd" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.063531 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.069080 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.069124 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.070783 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.071989 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.183731 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.183780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.183817 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.183838 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-scripts\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.183884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a55ce36-9d78-4311-a68e-507467c7a1ec-log-httpd\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.183912 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-config-data\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.183956 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a55ce36-9d78-4311-a68e-507467c7a1ec-run-httpd\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.183988 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5zlw\" (UniqueName: \"kubernetes.io/projected/4a55ce36-9d78-4311-a68e-507467c7a1ec-kube-api-access-j5zlw\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.285479 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-config-data\") pod \"ceilometer-0\" (UID: 
\"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.285618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a55ce36-9d78-4311-a68e-507467c7a1ec-run-httpd\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.285692 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5zlw\" (UniqueName: \"kubernetes.io/projected/4a55ce36-9d78-4311-a68e-507467c7a1ec-kube-api-access-j5zlw\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.285808 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.285850 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.285905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.285957 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-scripts\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.286041 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a55ce36-9d78-4311-a68e-507467c7a1ec-log-httpd\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.286879 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a55ce36-9d78-4311-a68e-507467c7a1ec-log-httpd\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.288903 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a55ce36-9d78-4311-a68e-507467c7a1ec-run-httpd\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.290830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.291360 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-config-data\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc 
kubenswrapper[4760]: I1125 09:01:21.292424 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.293828 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.296710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a55ce36-9d78-4311-a68e-507467c7a1ec-scripts\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.305065 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5zlw\" (UniqueName: \"kubernetes.io/projected/4a55ce36-9d78-4311-a68e-507467c7a1ec-kube-api-access-j5zlw\") pod \"ceilometer-0\" (UID: \"4a55ce36-9d78-4311-a68e-507467c7a1ec\") " pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.381328 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.830485 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 25 09:01:21 crc kubenswrapper[4760]: I1125 09:01:21.851124 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 09:01:22 crc kubenswrapper[4760]: I1125 09:01:22.010033 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a55ce36-9d78-4311-a68e-507467c7a1ec","Type":"ContainerStarted","Data":"d820125cf5df0063fc18aa30c540c02fd95f98d194ad288dd1128783b70d86a2"} Nov 25 09:01:22 crc kubenswrapper[4760]: I1125 09:01:22.950204 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee431a64-bc53-4858-b53d-e051099965a1" path="/var/lib/kubelet/pods/ee431a64-bc53-4858-b53d-e051099965a1/volumes" Nov 25 09:01:23 crc kubenswrapper[4760]: I1125 09:01:23.028394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a55ce36-9d78-4311-a68e-507467c7a1ec","Type":"ContainerStarted","Data":"a2a42c2a2c56ac0490794766e3db947bc743da7b7414c13eae9ab534c90832a2"} Nov 25 09:01:24 crc kubenswrapper[4760]: I1125 09:01:24.039259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a55ce36-9d78-4311-a68e-507467c7a1ec","Type":"ContainerStarted","Data":"b9436c46bed6e08e57441fb0d4c07135952f0dc4c15ea4f9632948d0f0c0d062"} Nov 25 09:01:25 crc kubenswrapper[4760]: I1125 09:01:25.049826 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a55ce36-9d78-4311-a68e-507467c7a1ec","Type":"ContainerStarted","Data":"c31baa4317c300d23bb8160a54b99723ec132b280618870c42c9bad88b3bdcb2"} Nov 25 09:01:26 crc kubenswrapper[4760]: I1125 09:01:26.063911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4a55ce36-9d78-4311-a68e-507467c7a1ec","Type":"ContainerStarted","Data":"424acb654178584c111cc4601406d6bb576d66717c913b6b252761e304462665"} Nov 25 09:01:26 crc kubenswrapper[4760]: I1125 09:01:26.064404 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 09:01:26 crc kubenswrapper[4760]: I1125 09:01:26.103211 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.814319955 podStartE2EDuration="5.103188186s" podCreationTimestamp="2025-11-25 09:01:21 +0000 UTC" firstStartedPulling="2025-11-25 09:01:21.86603791 +0000 UTC m=+3015.575068705" lastFinishedPulling="2025-11-25 09:01:25.154906141 +0000 UTC m=+3018.863936936" observedRunningTime="2025-11-25 09:01:26.095417915 +0000 UTC m=+3019.804448710" watchObservedRunningTime="2025-11-25 09:01:26.103188186 +0000 UTC m=+3019.812218981" Nov 25 09:01:51 crc kubenswrapper[4760]: I1125 09:01:51.394295 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 09:01:58 crc kubenswrapper[4760]: E1125 09:01:58.704508 4760 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.21:35900->38.129.56.21:33427: write tcp 38.129.56.21:35900->38.129.56.21:33427: write: broken pipe Nov 25 09:02:01 crc kubenswrapper[4760]: E1125 09:02:01.666951 4760 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.21:56228->38.129.56.21:33427: write tcp 38.129.56.21:56228->38.129.56.21:33427: write: broken pipe Nov 25 09:02:01 crc kubenswrapper[4760]: I1125 09:02:01.746399 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:02:01 crc kubenswrapper[4760]: I1125 
09:02:01.746471 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:02:31 crc kubenswrapper[4760]: I1125 09:02:31.746351 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:02:31 crc kubenswrapper[4760]: I1125 09:02:31.747032 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:02:39 crc kubenswrapper[4760]: I1125 09:02:39.173535 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc"] Nov 25 09:02:39 crc kubenswrapper[4760]: I1125 09:02:39.175658 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" Nov 25 09:02:39 crc kubenswrapper[4760]: I1125 09:02:39.220430 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc"] Nov 25 09:02:39 crc kubenswrapper[4760]: I1125 09:02:39.234348 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzjmf\" (UniqueName: \"kubernetes.io/projected/fe16fe4f-1740-4d43-a0d2-0d1d649c853c-kube-api-access-lzjmf\") pod \"openstack-operator-controller-operator-7759656c4c-n49xc\" (UID: \"fe16fe4f-1740-4d43-a0d2-0d1d649c853c\") " pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" Nov 25 09:02:39 crc kubenswrapper[4760]: I1125 09:02:39.336324 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzjmf\" (UniqueName: \"kubernetes.io/projected/fe16fe4f-1740-4d43-a0d2-0d1d649c853c-kube-api-access-lzjmf\") pod \"openstack-operator-controller-operator-7759656c4c-n49xc\" (UID: \"fe16fe4f-1740-4d43-a0d2-0d1d649c853c\") " pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" Nov 25 09:02:39 crc kubenswrapper[4760]: I1125 09:02:39.376788 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzjmf\" (UniqueName: \"kubernetes.io/projected/fe16fe4f-1740-4d43-a0d2-0d1d649c853c-kube-api-access-lzjmf\") pod \"openstack-operator-controller-operator-7759656c4c-n49xc\" (UID: \"fe16fe4f-1740-4d43-a0d2-0d1d649c853c\") " pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" Nov 25 09:02:39 crc kubenswrapper[4760]: I1125 09:02:39.500612 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" Nov 25 09:02:39 crc kubenswrapper[4760]: I1125 09:02:39.979701 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc"] Nov 25 09:02:40 crc kubenswrapper[4760]: I1125 09:02:40.781063 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" event={"ID":"fe16fe4f-1740-4d43-a0d2-0d1d649c853c","Type":"ContainerStarted","Data":"51e93dee920c6e6dc3b6c12a306270bbf1314dd407485389b62cbca7989f0403"} Nov 25 09:02:40 crc kubenswrapper[4760]: I1125 09:02:40.781103 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" event={"ID":"fe16fe4f-1740-4d43-a0d2-0d1d649c853c","Type":"ContainerStarted","Data":"6dc535163dac457d4402dabf8a94f8a81b2584b857efd40e06f2e9fd6c04bde0"} Nov 25 09:02:40 crc kubenswrapper[4760]: I1125 09:02:40.781191 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" Nov 25 09:02:40 crc kubenswrapper[4760]: I1125 09:02:40.817865 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" podStartSLOduration=1.8178439910000002 podStartE2EDuration="1.817843991s" podCreationTimestamp="2025-11-25 09:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:02:40.812980642 +0000 UTC m=+3094.522011437" watchObservedRunningTime="2025-11-25 09:02:40.817843991 +0000 UTC m=+3094.526874786" Nov 25 09:02:49 crc kubenswrapper[4760]: I1125 09:02:49.502556 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" Nov 25 09:02:49 crc kubenswrapper[4760]: I1125 09:02:49.587546 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms"] Nov 25 09:02:49 crc kubenswrapper[4760]: I1125 09:02:49.587790 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" podUID="2a8a302a-2ee0-4717-9558-74db40b7dfb1" containerName="operator" containerID="cri-o://aa52c6121ea7e90ab8a20330daec615c4f9b8803b2313580748d76965c14b5c7" gracePeriod=10 Nov 25 09:02:49 crc kubenswrapper[4760]: I1125 09:02:49.869814 4760 generic.go:334] "Generic (PLEG): container finished" podID="2a8a302a-2ee0-4717-9558-74db40b7dfb1" containerID="aa52c6121ea7e90ab8a20330daec615c4f9b8803b2313580748d76965c14b5c7" exitCode=0 Nov 25 09:02:49 crc kubenswrapper[4760]: I1125 09:02:49.869868 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" event={"ID":"2a8a302a-2ee0-4717-9558-74db40b7dfb1","Type":"ContainerDied","Data":"aa52c6121ea7e90ab8a20330daec615c4f9b8803b2313580748d76965c14b5c7"} Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.076955 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.142269 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzcnw\" (UniqueName: \"kubernetes.io/projected/2a8a302a-2ee0-4717-9558-74db40b7dfb1-kube-api-access-pzcnw\") pod \"2a8a302a-2ee0-4717-9558-74db40b7dfb1\" (UID: \"2a8a302a-2ee0-4717-9558-74db40b7dfb1\") " Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.149714 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8a302a-2ee0-4717-9558-74db40b7dfb1-kube-api-access-pzcnw" (OuterVolumeSpecName: "kube-api-access-pzcnw") pod "2a8a302a-2ee0-4717-9558-74db40b7dfb1" (UID: "2a8a302a-2ee0-4717-9558-74db40b7dfb1"). InnerVolumeSpecName "kube-api-access-pzcnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.245680 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzcnw\" (UniqueName: \"kubernetes.io/projected/2a8a302a-2ee0-4717-9558-74db40b7dfb1-kube-api-access-pzcnw\") on node \"crc\" DevicePath \"\"" Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.881331 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" event={"ID":"2a8a302a-2ee0-4717-9558-74db40b7dfb1","Type":"ContainerDied","Data":"e7391f3e3ecedffbda3fdc90b941dbbad3fb1ed16c0103853c79b52a61284126"} Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.881388 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms" Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.881417 4760 scope.go:117] "RemoveContainer" containerID="aa52c6121ea7e90ab8a20330daec615c4f9b8803b2313580748d76965c14b5c7" Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.922702 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms"] Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.930373 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-4z6ms"] Nov 25 09:02:50 crc kubenswrapper[4760]: I1125 09:02:50.950835 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a8a302a-2ee0-4717-9558-74db40b7dfb1" path="/var/lib/kubelet/pods/2a8a302a-2ee0-4717-9558-74db40b7dfb1/volumes" Nov 25 09:03:01 crc kubenswrapper[4760]: I1125 09:03:01.746841 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:03:01 crc kubenswrapper[4760]: I1125 09:03:01.747449 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:03:01 crc kubenswrapper[4760]: I1125 09:03:01.747503 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 09:03:01 crc kubenswrapper[4760]: I1125 09:03:01.748359 4760 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:03:01 crc kubenswrapper[4760]: I1125 09:03:01.748425 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" gracePeriod=600 Nov 25 09:03:01 crc kubenswrapper[4760]: E1125 09:03:01.870476 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:03:01 crc kubenswrapper[4760]: I1125 09:03:01.984085 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" exitCode=0 Nov 25 09:03:01 crc kubenswrapper[4760]: I1125 09:03:01.984125 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0"} Nov 25 09:03:01 crc kubenswrapper[4760]: I1125 09:03:01.984162 4760 scope.go:117] "RemoveContainer" containerID="13e9ce4d6ea90c9d403df75bea2e9a8044a9729da91e45cf4a3c2a094df970e2" Nov 25 09:03:01 
crc kubenswrapper[4760]: I1125 09:03:01.984894 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:03:01 crc kubenswrapper[4760]: E1125 09:03:01.985200 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:03:13 crc kubenswrapper[4760]: I1125 09:03:13.938811 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:03:13 crc kubenswrapper[4760]: E1125 09:03:13.939549 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:03:18 crc kubenswrapper[4760]: I1125 09:03:18.852017 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j"] Nov 25 09:03:18 crc kubenswrapper[4760]: E1125 09:03:18.853037 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8a302a-2ee0-4717-9558-74db40b7dfb1" containerName="operator" Nov 25 09:03:18 crc kubenswrapper[4760]: I1125 09:03:18.853052 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8a302a-2ee0-4717-9558-74db40b7dfb1" containerName="operator" Nov 25 09:03:18 crc kubenswrapper[4760]: I1125 09:03:18.853324 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2a8a302a-2ee0-4717-9558-74db40b7dfb1" containerName="operator" Nov 25 09:03:18 crc kubenswrapper[4760]: I1125 09:03:18.854546 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" Nov 25 09:03:18 crc kubenswrapper[4760]: I1125 09:03:18.863807 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j"] Nov 25 09:03:18 crc kubenswrapper[4760]: I1125 09:03:18.980897 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j5rr\" (UniqueName: \"kubernetes.io/projected/042ed3e8-ea28-44f7-9859-2d0a1d5c3e17-kube-api-access-7j5rr\") pod \"test-operator-controller-manager-8566bc9698-5hw7j\" (UID: \"042ed3e8-ea28-44f7-9859-2d0a1d5c3e17\") " pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" Nov 25 09:03:19 crc kubenswrapper[4760]: I1125 09:03:19.082695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j5rr\" (UniqueName: \"kubernetes.io/projected/042ed3e8-ea28-44f7-9859-2d0a1d5c3e17-kube-api-access-7j5rr\") pod \"test-operator-controller-manager-8566bc9698-5hw7j\" (UID: \"042ed3e8-ea28-44f7-9859-2d0a1d5c3e17\") " pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" Nov 25 09:03:19 crc kubenswrapper[4760]: I1125 09:03:19.103127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j5rr\" (UniqueName: \"kubernetes.io/projected/042ed3e8-ea28-44f7-9859-2d0a1d5c3e17-kube-api-access-7j5rr\") pod \"test-operator-controller-manager-8566bc9698-5hw7j\" (UID: \"042ed3e8-ea28-44f7-9859-2d0a1d5c3e17\") " pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" Nov 25 09:03:19 crc kubenswrapper[4760]: I1125 09:03:19.177680 4760 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" Nov 25 09:03:19 crc kubenswrapper[4760]: I1125 09:03:19.632606 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j"] Nov 25 09:03:20 crc kubenswrapper[4760]: I1125 09:03:20.144729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" event={"ID":"042ed3e8-ea28-44f7-9859-2d0a1d5c3e17","Type":"ContainerStarted","Data":"0d2b719055fd7722086723393d9b5d4d40af49bba4718f121226e98bcbc46250"} Nov 25 09:03:21 crc kubenswrapper[4760]: I1125 09:03:21.160583 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" event={"ID":"042ed3e8-ea28-44f7-9859-2d0a1d5c3e17","Type":"ContainerStarted","Data":"dc7f5d8a8a611b21c0fc6cb81d7790a3bb78564897cd7e39ae9f85bfacd29c62"} Nov 25 09:03:21 crc kubenswrapper[4760]: I1125 09:03:21.161238 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" event={"ID":"042ed3e8-ea28-44f7-9859-2d0a1d5c3e17","Type":"ContainerStarted","Data":"c3034fbbefe56c0e2a72391bfc1a76e9e30f47ddad367e337c4feaa782852fc1"} Nov 25 09:03:22 crc kubenswrapper[4760]: I1125 09:03:22.167428 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" Nov 25 09:03:22 crc kubenswrapper[4760]: I1125 09:03:22.188791 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" podStartSLOduration=3.133824236 podStartE2EDuration="4.188764624s" podCreationTimestamp="2025-11-25 09:03:18 +0000 UTC" firstStartedPulling="2025-11-25 09:03:19.641795409 +0000 UTC m=+3133.350826204" lastFinishedPulling="2025-11-25 
09:03:20.696735797 +0000 UTC m=+3134.405766592" observedRunningTime="2025-11-25 09:03:22.182768223 +0000 UTC m=+3135.891799028" watchObservedRunningTime="2025-11-25 09:03:22.188764624 +0000 UTC m=+3135.897795429" Nov 25 09:03:28 crc kubenswrapper[4760]: I1125 09:03:28.938955 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:03:28 crc kubenswrapper[4760]: E1125 09:03:28.939754 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:03:29 crc kubenswrapper[4760]: I1125 09:03:29.181190 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" Nov 25 09:03:29 crc kubenswrapper[4760]: I1125 09:03:29.234031 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8"] Nov 25 09:03:29 crc kubenswrapper[4760]: I1125 09:03:29.234298 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" podUID="025ea53a-75d6-443a-965c-83ee12e37737" containerName="manager" containerID="cri-o://f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5" gracePeriod=10 Nov 25 09:03:29 crc kubenswrapper[4760]: I1125 09:03:29.234782 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" podUID="025ea53a-75d6-443a-965c-83ee12e37737" containerName="kube-rbac-proxy" 
containerID="cri-o://4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5" gracePeriod=10 Nov 25 09:03:29 crc kubenswrapper[4760]: I1125 09:03:29.755190 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" Nov 25 09:03:29 crc kubenswrapper[4760]: I1125 09:03:29.899445 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzggd\" (UniqueName: \"kubernetes.io/projected/025ea53a-75d6-443a-965c-83ee12e37737-kube-api-access-xzggd\") pod \"025ea53a-75d6-443a-965c-83ee12e37737\" (UID: \"025ea53a-75d6-443a-965c-83ee12e37737\") " Nov 25 09:03:29 crc kubenswrapper[4760]: I1125 09:03:29.905431 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025ea53a-75d6-443a-965c-83ee12e37737-kube-api-access-xzggd" (OuterVolumeSpecName: "kube-api-access-xzggd") pod "025ea53a-75d6-443a-965c-83ee12e37737" (UID: "025ea53a-75d6-443a-965c-83ee12e37737"). InnerVolumeSpecName "kube-api-access-xzggd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.002680 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzggd\" (UniqueName: \"kubernetes.io/projected/025ea53a-75d6-443a-965c-83ee12e37737-kube-api-access-xzggd\") on node \"crc\" DevicePath \"\"" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.239828 4760 generic.go:334] "Generic (PLEG): container finished" podID="025ea53a-75d6-443a-965c-83ee12e37737" containerID="4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5" exitCode=0 Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.240146 4760 generic.go:334] "Generic (PLEG): container finished" podID="025ea53a-75d6-443a-965c-83ee12e37737" containerID="f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5" exitCode=0 Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.240171 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" event={"ID":"025ea53a-75d6-443a-965c-83ee12e37737","Type":"ContainerDied","Data":"4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5"} Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.240198 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" event={"ID":"025ea53a-75d6-443a-965c-83ee12e37737","Type":"ContainerDied","Data":"f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5"} Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.240207 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" event={"ID":"025ea53a-75d6-443a-965c-83ee12e37737","Type":"ContainerDied","Data":"81016787006dbf03c90e8deab574132617c9f8561dc1663c0df8fec5063f831f"} Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.240222 4760 scope.go:117] "RemoveContainer" 
containerID="4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.240399 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.271504 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8"] Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.276082 4760 scope.go:117] "RemoveContainer" containerID="f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.280093 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-zhdg8"] Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.298603 4760 scope.go:117] "RemoveContainer" containerID="4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5" Nov 25 09:03:30 crc kubenswrapper[4760]: E1125 09:03:30.299022 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5\": container with ID starting with 4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5 not found: ID does not exist" containerID="4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.299084 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5"} err="failed to get container status \"4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5\": rpc error: code = NotFound desc = could not find container \"4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5\": container with ID 
starting with 4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5 not found: ID does not exist" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.299105 4760 scope.go:117] "RemoveContainer" containerID="f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5" Nov 25 09:03:30 crc kubenswrapper[4760]: E1125 09:03:30.299474 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5\": container with ID starting with f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5 not found: ID does not exist" containerID="f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.299496 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5"} err="failed to get container status \"f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5\": rpc error: code = NotFound desc = could not find container \"f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5\": container with ID starting with f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5 not found: ID does not exist" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.299512 4760 scope.go:117] "RemoveContainer" containerID="4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.299970 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5"} err="failed to get container status \"4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5\": rpc error: code = NotFound desc = could not find container \"4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5\": 
container with ID starting with 4ec6c758d217335b4c677eaecaadc3944a4dadcf35c42910811d8ad395f42ce5 not found: ID does not exist" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.300008 4760 scope.go:117] "RemoveContainer" containerID="f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.301846 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5"} err="failed to get container status \"f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5\": rpc error: code = NotFound desc = could not find container \"f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5\": container with ID starting with f973dd843c7f4d754d4258c65a71a9a468ced4de1eaee7d8636b4b3b33f31de5 not found: ID does not exist" Nov 25 09:03:30 crc kubenswrapper[4760]: I1125 09:03:30.951012 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025ea53a-75d6-443a-965c-83ee12e37737" path="/var/lib/kubelet/pods/025ea53a-75d6-443a-965c-83ee12e37737/volumes" Nov 25 09:03:41 crc kubenswrapper[4760]: I1125 09:03:41.938792 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:03:41 crc kubenswrapper[4760]: E1125 09:03:41.939643 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:03:54 crc kubenswrapper[4760]: I1125 09:03:54.938326 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" 
Nov 25 09:03:54 crc kubenswrapper[4760]: E1125 09:03:54.939175 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.710020 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bjblx"] Nov 25 09:04:06 crc kubenswrapper[4760]: E1125 09:04:06.711150 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025ea53a-75d6-443a-965c-83ee12e37737" containerName="kube-rbac-proxy" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.711167 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="025ea53a-75d6-443a-965c-83ee12e37737" containerName="kube-rbac-proxy" Nov 25 09:04:06 crc kubenswrapper[4760]: E1125 09:04:06.711189 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025ea53a-75d6-443a-965c-83ee12e37737" containerName="manager" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.711197 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="025ea53a-75d6-443a-965c-83ee12e37737" containerName="manager" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.711475 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="025ea53a-75d6-443a-965c-83ee12e37737" containerName="manager" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.711491 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="025ea53a-75d6-443a-965c-83ee12e37737" containerName="kube-rbac-proxy" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.713166 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.724981 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bjblx"] Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.745398 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a85cc6-c1dd-4791-a1c5-d6853d955877-catalog-content\") pod \"certified-operators-bjblx\" (UID: \"35a85cc6-c1dd-4791-a1c5-d6853d955877\") " pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.746450 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a85cc6-c1dd-4791-a1c5-d6853d955877-utilities\") pod \"certified-operators-bjblx\" (UID: \"35a85cc6-c1dd-4791-a1c5-d6853d955877\") " pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.746696 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjbqc\" (UniqueName: \"kubernetes.io/projected/35a85cc6-c1dd-4791-a1c5-d6853d955877-kube-api-access-jjbqc\") pod \"certified-operators-bjblx\" (UID: \"35a85cc6-c1dd-4791-a1c5-d6853d955877\") " pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.848735 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a85cc6-c1dd-4791-a1c5-d6853d955877-catalog-content\") pod \"certified-operators-bjblx\" (UID: \"35a85cc6-c1dd-4791-a1c5-d6853d955877\") " pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.848900 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a85cc6-c1dd-4791-a1c5-d6853d955877-utilities\") pod \"certified-operators-bjblx\" (UID: \"35a85cc6-c1dd-4791-a1c5-d6853d955877\") " pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.848979 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjbqc\" (UniqueName: \"kubernetes.io/projected/35a85cc6-c1dd-4791-a1c5-d6853d955877-kube-api-access-jjbqc\") pod \"certified-operators-bjblx\" (UID: \"35a85cc6-c1dd-4791-a1c5-d6853d955877\") " pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.849315 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a85cc6-c1dd-4791-a1c5-d6853d955877-catalog-content\") pod \"certified-operators-bjblx\" (UID: \"35a85cc6-c1dd-4791-a1c5-d6853d955877\") " pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.849564 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a85cc6-c1dd-4791-a1c5-d6853d955877-utilities\") pod \"certified-operators-bjblx\" (UID: \"35a85cc6-c1dd-4791-a1c5-d6853d955877\") " pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:06 crc kubenswrapper[4760]: I1125 09:04:06.873275 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjbqc\" (UniqueName: \"kubernetes.io/projected/35a85cc6-c1dd-4791-a1c5-d6853d955877-kube-api-access-jjbqc\") pod \"certified-operators-bjblx\" (UID: \"35a85cc6-c1dd-4791-a1c5-d6853d955877\") " pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:07 crc kubenswrapper[4760]: I1125 09:04:07.039486 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:07 crc kubenswrapper[4760]: I1125 09:04:07.541013 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bjblx"] Nov 25 09:04:07 crc kubenswrapper[4760]: I1125 09:04:07.606232 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjblx" event={"ID":"35a85cc6-c1dd-4791-a1c5-d6853d955877","Type":"ContainerStarted","Data":"33c8b53b3b23348fa3d0c5608518e82280483b9e259cde3ab25ca906844ce76a"} Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.513044 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f25fn"] Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.515364 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.524577 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f25fn"] Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.615214 4760 generic.go:334] "Generic (PLEG): container finished" podID="35a85cc6-c1dd-4791-a1c5-d6853d955877" containerID="59ff976b58dfef46dd8abc34798d615033b41403dd54a4e1ad87c9086af63870" exitCode=0 Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.615279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjblx" event={"ID":"35a85cc6-c1dd-4791-a1c5-d6853d955877","Type":"ContainerDied","Data":"59ff976b58dfef46dd8abc34798d615033b41403dd54a4e1ad87c9086af63870"} Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.686147 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-utilities\") pod \"redhat-operators-f25fn\" (UID: 
\"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.686199 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz6qk\" (UniqueName: \"kubernetes.io/projected/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-kube-api-access-lz6qk\") pod \"redhat-operators-f25fn\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.687624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-catalog-content\") pod \"redhat-operators-f25fn\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.789828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-catalog-content\") pod \"redhat-operators-f25fn\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.789950 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-utilities\") pod \"redhat-operators-f25fn\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.789983 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz6qk\" (UniqueName: \"kubernetes.io/projected/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-kube-api-access-lz6qk\") pod \"redhat-operators-f25fn\" 
(UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.790422 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-catalog-content\") pod \"redhat-operators-f25fn\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.790517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-utilities\") pod \"redhat-operators-f25fn\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.812051 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz6qk\" (UniqueName: \"kubernetes.io/projected/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-kube-api-access-lz6qk\") pod \"redhat-operators-f25fn\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.846791 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:08 crc kubenswrapper[4760]: I1125 09:04:08.939105 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:04:08 crc kubenswrapper[4760]: E1125 09:04:08.939728 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:04:09 crc kubenswrapper[4760]: I1125 09:04:09.343545 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f25fn"] Nov 25 09:04:09 crc kubenswrapper[4760]: I1125 09:04:09.628820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f25fn" event={"ID":"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a","Type":"ContainerStarted","Data":"a8a997f9a6403cc56f718b2a815c7bb2677630a23c3dbb2c3ec80b63933a6e2b"} Nov 25 09:04:09 crc kubenswrapper[4760]: I1125 09:04:09.629127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f25fn" event={"ID":"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a","Type":"ContainerStarted","Data":"07c715ff9f27a1452c3a4563f25fb0c21728c8e1e10ff05a7d718d05e853903a"} Nov 25 09:04:10 crc kubenswrapper[4760]: I1125 09:04:10.644469 4760 generic.go:334] "Generic (PLEG): container finished" podID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerID="a8a997f9a6403cc56f718b2a815c7bb2677630a23c3dbb2c3ec80b63933a6e2b" exitCode=0 Nov 25 09:04:10 crc kubenswrapper[4760]: I1125 09:04:10.644580 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f25fn" 
event={"ID":"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a","Type":"ContainerDied","Data":"a8a997f9a6403cc56f718b2a815c7bb2677630a23c3dbb2c3ec80b63933a6e2b"} Nov 25 09:04:17 crc kubenswrapper[4760]: I1125 09:04:17.742514 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjblx" event={"ID":"35a85cc6-c1dd-4791-a1c5-d6853d955877","Type":"ContainerStarted","Data":"ceaba9c89f6f13b0cba91f7f48bb9a26497396f4589f601c1bb3ff8ce785affa"} Nov 25 09:04:17 crc kubenswrapper[4760]: I1125 09:04:17.744742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f25fn" event={"ID":"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a","Type":"ContainerStarted","Data":"965cab0d1792f5bd484f3390c984e18185a4f8dbd3de5de0265531f67ba0b1a0"} Nov 25 09:04:18 crc kubenswrapper[4760]: I1125 09:04:18.761399 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjblx" event={"ID":"35a85cc6-c1dd-4791-a1c5-d6853d955877","Type":"ContainerDied","Data":"ceaba9c89f6f13b0cba91f7f48bb9a26497396f4589f601c1bb3ff8ce785affa"} Nov 25 09:04:18 crc kubenswrapper[4760]: I1125 09:04:18.761319 4760 generic.go:334] "Generic (PLEG): container finished" podID="35a85cc6-c1dd-4791-a1c5-d6853d955877" containerID="ceaba9c89f6f13b0cba91f7f48bb9a26497396f4589f601c1bb3ff8ce785affa" exitCode=0 Nov 25 09:04:18 crc kubenswrapper[4760]: I1125 09:04:18.770501 4760 generic.go:334] "Generic (PLEG): container finished" podID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerID="965cab0d1792f5bd484f3390c984e18185a4f8dbd3de5de0265531f67ba0b1a0" exitCode=0 Nov 25 09:04:18 crc kubenswrapper[4760]: I1125 09:04:18.770549 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f25fn" event={"ID":"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a","Type":"ContainerDied","Data":"965cab0d1792f5bd484f3390c984e18185a4f8dbd3de5de0265531f67ba0b1a0"} Nov 25 09:04:23 crc kubenswrapper[4760]: I1125 
09:04:23.938622 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:04:23 crc kubenswrapper[4760]: E1125 09:04:23.939337 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:04:31 crc kubenswrapper[4760]: I1125 09:04:31.907566 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bjblx" event={"ID":"35a85cc6-c1dd-4791-a1c5-d6853d955877","Type":"ContainerStarted","Data":"05ac18d251682138f4620b1d8a730a3de45edfec276bc3d1404cb21ea25731b4"} Nov 25 09:04:31 crc kubenswrapper[4760]: I1125 09:04:31.912326 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f25fn" event={"ID":"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a","Type":"ContainerStarted","Data":"6d6ad2a6b7e64b63de60dd754f92f35cb1b1bb85c42499851d123d53b760ed78"} Nov 25 09:04:31 crc kubenswrapper[4760]: I1125 09:04:31.939499 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bjblx" podStartSLOduration=3.912404497 podStartE2EDuration="25.939472832s" podCreationTimestamp="2025-11-25 09:04:06 +0000 UTC" firstStartedPulling="2025-11-25 09:04:08.617319143 +0000 UTC m=+3182.326349938" lastFinishedPulling="2025-11-25 09:04:30.644387468 +0000 UTC m=+3204.353418273" observedRunningTime="2025-11-25 09:04:31.928392454 +0000 UTC m=+3205.637423249" watchObservedRunningTime="2025-11-25 09:04:31.939472832 +0000 UTC m=+3205.648503627" Nov 25 09:04:36 crc kubenswrapper[4760]: I1125 09:04:36.947328 4760 scope.go:117] 
"RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:04:36 crc kubenswrapper[4760]: E1125 09:04:36.948344 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:04:37 crc kubenswrapper[4760]: I1125 09:04:37.040625 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:37 crc kubenswrapper[4760]: I1125 09:04:37.040765 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:37 crc kubenswrapper[4760]: I1125 09:04:37.089099 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:37 crc kubenswrapper[4760]: I1125 09:04:37.109170 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f25fn" podStartSLOduration=8.528256771 podStartE2EDuration="29.109148119s" podCreationTimestamp="2025-11-25 09:04:08 +0000 UTC" firstStartedPulling="2025-11-25 09:04:10.648824881 +0000 UTC m=+3184.357855676" lastFinishedPulling="2025-11-25 09:04:31.229716229 +0000 UTC m=+3204.938747024" observedRunningTime="2025-11-25 09:04:31.953328848 +0000 UTC m=+3205.662359653" watchObservedRunningTime="2025-11-25 09:04:37.109148119 +0000 UTC m=+3210.818178914" Nov 25 09:04:38 crc kubenswrapper[4760]: I1125 09:04:38.022645 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bjblx" Nov 25 09:04:38 
crc kubenswrapper[4760]: I1125 09:04:38.092862 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bjblx"] Nov 25 09:04:38 crc kubenswrapper[4760]: I1125 09:04:38.144337 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chml5"] Nov 25 09:04:38 crc kubenswrapper[4760]: I1125 09:04:38.144807 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-chml5" podUID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerName="registry-server" containerID="cri-o://9ab2c47fd9e64da1d5984e2d3a93d33df5d5e68de70a9d5b6d9b1bf909e7d0f7" gracePeriod=2 Nov 25 09:04:38 crc kubenswrapper[4760]: I1125 09:04:38.846862 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:38 crc kubenswrapper[4760]: I1125 09:04:38.847055 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:38 crc kubenswrapper[4760]: I1125 09:04:38.899475 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:38 crc kubenswrapper[4760]: I1125 09:04:38.987744 4760 generic.go:334] "Generic (PLEG): container finished" podID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerID="9ab2c47fd9e64da1d5984e2d3a93d33df5d5e68de70a9d5b6d9b1bf909e7d0f7" exitCode=0 Nov 25 09:04:38 crc kubenswrapper[4760]: I1125 09:04:38.987807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chml5" event={"ID":"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0","Type":"ContainerDied","Data":"9ab2c47fd9e64da1d5984e2d3a93d33df5d5e68de70a9d5b6d9b1bf909e7d0f7"} Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.047800 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.378725 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chml5" Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.529757 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxxbn\" (UniqueName: \"kubernetes.io/projected/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-kube-api-access-xxxbn\") pod \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.530022 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-catalog-content\") pod \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.530097 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-utilities\") pod \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\" (UID: \"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0\") " Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.530987 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-utilities" (OuterVolumeSpecName: "utilities") pod "36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" (UID: "36bfebb6-11e8-4a9d-9bb2-490ae4405cd0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.545496 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-kube-api-access-xxxbn" (OuterVolumeSpecName: "kube-api-access-xxxbn") pod "36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" (UID: "36bfebb6-11e8-4a9d-9bb2-490ae4405cd0"). InnerVolumeSpecName "kube-api-access-xxxbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.585035 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" (UID: "36bfebb6-11e8-4a9d-9bb2-490ae4405cd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.632078 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.632111 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.632124 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxxbn\" (UniqueName: \"kubernetes.io/projected/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0-kube-api-access-xxxbn\") on node \"crc\" DevicePath \"\"" Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.999605 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chml5" 
event={"ID":"36bfebb6-11e8-4a9d-9bb2-490ae4405cd0","Type":"ContainerDied","Data":"677d9f45b96d0f024c9351b41186d1e444105e69b8aa138b0e4463cb9fa4ed8c"} Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.999673 4760 scope.go:117] "RemoveContainer" containerID="9ab2c47fd9e64da1d5984e2d3a93d33df5d5e68de70a9d5b6d9b1bf909e7d0f7" Nov 25 09:04:39 crc kubenswrapper[4760]: I1125 09:04:39.999684 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chml5" Nov 25 09:04:40 crc kubenswrapper[4760]: I1125 09:04:40.036422 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chml5"] Nov 25 09:04:40 crc kubenswrapper[4760]: I1125 09:04:40.045219 4760 scope.go:117] "RemoveContainer" containerID="5d6b822026d2709b772adce245c31d38fcf4f66bd45c7ece11a5cea2d576058f" Nov 25 09:04:40 crc kubenswrapper[4760]: I1125 09:04:40.046220 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-chml5"] Nov 25 09:04:40 crc kubenswrapper[4760]: I1125 09:04:40.082167 4760 scope.go:117] "RemoveContainer" containerID="57b74452faade2c3c36321c26262dc91d7e29e740fd3ca9ae89ad845e8965e1a" Nov 25 09:04:40 crc kubenswrapper[4760]: I1125 09:04:40.713668 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f25fn"] Nov 25 09:04:40 crc kubenswrapper[4760]: I1125 09:04:40.950498 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" path="/var/lib/kubelet/pods/36bfebb6-11e8-4a9d-9bb2-490ae4405cd0/volumes" Nov 25 09:04:41 crc kubenswrapper[4760]: I1125 09:04:41.008503 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f25fn" podUID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerName="registry-server" 
containerID="cri-o://6d6ad2a6b7e64b63de60dd754f92f35cb1b1bb85c42499851d123d53b760ed78" gracePeriod=2 Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.020918 4760 generic.go:334] "Generic (PLEG): container finished" podID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerID="6d6ad2a6b7e64b63de60dd754f92f35cb1b1bb85c42499851d123d53b760ed78" exitCode=0 Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.020992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f25fn" event={"ID":"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a","Type":"ContainerDied","Data":"6d6ad2a6b7e64b63de60dd754f92f35cb1b1bb85c42499851d123d53b760ed78"} Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.140276 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.283976 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-utilities\") pod \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.284189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz6qk\" (UniqueName: \"kubernetes.io/projected/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-kube-api-access-lz6qk\") pod \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.284276 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-catalog-content\") pod \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\" (UID: \"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a\") " Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 
09:04:42.284985 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-utilities" (OuterVolumeSpecName: "utilities") pod "85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" (UID: "85de17ba-b5fa-4570-a75e-2f0a0bbbf64a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.290068 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-kube-api-access-lz6qk" (OuterVolumeSpecName: "kube-api-access-lz6qk") pod "85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" (UID: "85de17ba-b5fa-4570-a75e-2f0a0bbbf64a"). InnerVolumeSpecName "kube-api-access-lz6qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.383196 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" (UID: "85de17ba-b5fa-4570-a75e-2f0a0bbbf64a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.386425 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz6qk\" (UniqueName: \"kubernetes.io/projected/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-kube-api-access-lz6qk\") on node \"crc\" DevicePath \"\"" Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.386475 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:04:42 crc kubenswrapper[4760]: I1125 09:04:42.386489 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:04:43 crc kubenswrapper[4760]: I1125 09:04:43.038738 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f25fn" event={"ID":"85de17ba-b5fa-4570-a75e-2f0a0bbbf64a","Type":"ContainerDied","Data":"07c715ff9f27a1452c3a4563f25fb0c21728c8e1e10ff05a7d718d05e853903a"} Nov 25 09:04:43 crc kubenswrapper[4760]: I1125 09:04:43.039083 4760 scope.go:117] "RemoveContainer" containerID="6d6ad2a6b7e64b63de60dd754f92f35cb1b1bb85c42499851d123d53b760ed78" Nov 25 09:04:43 crc kubenswrapper[4760]: I1125 09:04:43.038820 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f25fn" Nov 25 09:04:43 crc kubenswrapper[4760]: I1125 09:04:43.067230 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f25fn"] Nov 25 09:04:43 crc kubenswrapper[4760]: I1125 09:04:43.069012 4760 scope.go:117] "RemoveContainer" containerID="965cab0d1792f5bd484f3390c984e18185a4f8dbd3de5de0265531f67ba0b1a0" Nov 25 09:04:43 crc kubenswrapper[4760]: I1125 09:04:43.074122 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f25fn"] Nov 25 09:04:43 crc kubenswrapper[4760]: I1125 09:04:43.091950 4760 scope.go:117] "RemoveContainer" containerID="a8a997f9a6403cc56f718b2a815c7bb2677630a23c3dbb2c3ec80b63933a6e2b" Nov 25 09:04:44 crc kubenswrapper[4760]: I1125 09:04:44.949329 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" path="/var/lib/kubelet/pods/85de17ba-b5fa-4570-a75e-2f0a0bbbf64a/volumes" Nov 25 09:04:50 crc kubenswrapper[4760]: I1125 09:04:50.940025 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:04:50 crc kubenswrapper[4760]: E1125 09:04:50.940994 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:05:01 crc kubenswrapper[4760]: I1125 09:05:01.939021 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:05:01 crc kubenswrapper[4760]: E1125 09:05:01.939931 4760 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:05:12 crc kubenswrapper[4760]: I1125 09:05:12.938647 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:05:12 crc kubenswrapper[4760]: E1125 09:05:12.939553 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:05:23 crc kubenswrapper[4760]: I1125 09:05:23.938844 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:05:23 crc kubenswrapper[4760]: E1125 09:05:23.939714 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:05:36 crc kubenswrapper[4760]: I1125 09:05:36.948513 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:05:36 crc kubenswrapper[4760]: E1125 09:05:36.950178 4760 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.474625 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Nov 25 09:05:40 crc kubenswrapper[4760]: E1125 09:05:40.478018 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerName="extract-content" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.478783 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerName="extract-content" Nov 25 09:05:40 crc kubenswrapper[4760]: E1125 09:05:40.478871 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerName="extract-utilities" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.478882 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerName="extract-utilities" Nov 25 09:05:40 crc kubenswrapper[4760]: E1125 09:05:40.478914 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerName="registry-server" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.478933 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerName="registry-server" Nov 25 09:05:40 crc kubenswrapper[4760]: E1125 09:05:40.478978 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerName="registry-server" Nov 25 09:05:40 crc 
kubenswrapper[4760]: I1125 09:05:40.478986 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerName="registry-server" Nov 25 09:05:40 crc kubenswrapper[4760]: E1125 09:05:40.478997 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerName="extract-content" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.479005 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerName="extract-content" Nov 25 09:05:40 crc kubenswrapper[4760]: E1125 09:05:40.479022 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerName="extract-utilities" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.479029 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerName="extract-utilities" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.479521 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="85de17ba-b5fa-4570-a75e-2f0a0bbbf64a" containerName="registry-server" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.479540 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bfebb6-11e8-4a9d-9bb2-490ae4405cd0" containerName="registry-server" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.480380 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.485016 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.485132 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.485374 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.485833 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gq598" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.486367 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.605938 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.606200 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.606218 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldvg\" (UniqueName: 
\"kubernetes.io/projected/a546f694-04d6-4212-b53a-142420418b97-kube-api-access-sldvg\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.606238 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.606284 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.606313 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.606348 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.606409 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.606450 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.606485 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.708652 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.708811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.709868 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.709951 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.710287 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.710428 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.710517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc 
kubenswrapper[4760]: I1125 09:05:40.710576 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.710599 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sldvg\" (UniqueName: \"kubernetes.io/projected/a546f694-04d6-4212-b53a-142420418b97-kube-api-access-sldvg\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.710619 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.710650 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.710717 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.710770 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.712518 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.712846 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.714858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.715665 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.715683 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.715793 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.726042 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sldvg\" (UniqueName: \"kubernetes.io/projected/a546f694-04d6-4212-b53a-142420418b97-kube-api-access-sldvg\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.750308 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:40 crc kubenswrapper[4760]: I1125 09:05:40.814814 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:05:41 crc kubenswrapper[4760]: I1125 09:05:41.330196 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Nov 25 09:05:41 crc kubenswrapper[4760]: I1125 09:05:41.781822 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"a546f694-04d6-4212-b53a-142420418b97","Type":"ContainerStarted","Data":"3c65650755f2c41dbaed8c026d0df0453690cf3c837c43e6e51828991be45cde"} Nov 25 09:05:50 crc kubenswrapper[4760]: I1125 09:05:50.938138 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:05:50 crc kubenswrapper[4760]: E1125 09:05:50.938892 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.312557 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4dfbr"] Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.316155 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.327961 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4dfbr"] Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.503195 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-utilities\") pod \"community-operators-4dfbr\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.503336 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-catalog-content\") pod \"community-operators-4dfbr\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.503386 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtwxd\" (UniqueName: \"kubernetes.io/projected/a1c65e75-fcb5-4834-b698-7f185702fdb8-kube-api-access-jtwxd\") pod \"community-operators-4dfbr\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.605589 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-utilities\") pod \"community-operators-4dfbr\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.605757 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-catalog-content\") pod \"community-operators-4dfbr\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.605814 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtwxd\" (UniqueName: \"kubernetes.io/projected/a1c65e75-fcb5-4834-b698-7f185702fdb8-kube-api-access-jtwxd\") pod \"community-operators-4dfbr\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.606134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-utilities\") pod \"community-operators-4dfbr\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.606817 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-catalog-content\") pod \"community-operators-4dfbr\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.629832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtwxd\" (UniqueName: \"kubernetes.io/projected/a1c65e75-fcb5-4834-b698-7f185702fdb8-kube-api-access-jtwxd\") pod \"community-operators-4dfbr\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:02 crc kubenswrapper[4760]: I1125 09:06:02.646891 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:04 crc kubenswrapper[4760]: I1125 09:06:04.938407 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:06:04 crc kubenswrapper[4760]: E1125 09:06:04.939361 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:06:18 crc kubenswrapper[4760]: I1125 09:06:18.939350 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:06:18 crc kubenswrapper[4760]: E1125 09:06:18.940262 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:06:29 crc kubenswrapper[4760]: E1125 09:06:29.275189 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 25 09:06:29 crc kubenswrapper[4760]: E1125 09:06:29.275723 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sldvg,ReadOnly:true,MountPath:/var/run/secrets/kubernete
s.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-full_openstack(a546f694-04d6-4212-b53a-142420418b97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 09:06:29 crc kubenswrapper[4760]: E1125 09:06:29.276935 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="a546f694-04d6-4212-b53a-142420418b97" Nov 25 09:06:29 crc kubenswrapper[4760]: E1125 09:06:29.366113 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="a546f694-04d6-4212-b53a-142420418b97" Nov 25 09:06:29 crc kubenswrapper[4760]: I1125 09:06:29.663857 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4dfbr"] Nov 25 09:06:30 crc kubenswrapper[4760]: I1125 09:06:30.377002 4760 generic.go:334] "Generic (PLEG): container finished" podID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerID="a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff" exitCode=0 Nov 25 09:06:30 crc kubenswrapper[4760]: I1125 09:06:30.377109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dfbr" event={"ID":"a1c65e75-fcb5-4834-b698-7f185702fdb8","Type":"ContainerDied","Data":"a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff"} Nov 25 09:06:30 crc kubenswrapper[4760]: I1125 09:06:30.377450 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dfbr" event={"ID":"a1c65e75-fcb5-4834-b698-7f185702fdb8","Type":"ContainerStarted","Data":"80f3e55a43057886c88ef911572195da9b37db82d674df828290a6572d98cd9e"} Nov 25 09:06:30 crc kubenswrapper[4760]: I1125 09:06:30.380039 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:06:30 crc kubenswrapper[4760]: I1125 09:06:30.944260 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:06:30 crc kubenswrapper[4760]: E1125 09:06:30.949466 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:06:31 crc kubenswrapper[4760]: I1125 09:06:31.389494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dfbr" event={"ID":"a1c65e75-fcb5-4834-b698-7f185702fdb8","Type":"ContainerStarted","Data":"183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b"} Nov 25 09:06:33 crc kubenswrapper[4760]: I1125 09:06:33.409224 4760 generic.go:334] "Generic (PLEG): container finished" podID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerID="183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b" exitCode=0 Nov 25 09:06:33 crc kubenswrapper[4760]: I1125 09:06:33.409288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dfbr" event={"ID":"a1c65e75-fcb5-4834-b698-7f185702fdb8","Type":"ContainerDied","Data":"183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b"} Nov 25 09:06:34 crc kubenswrapper[4760]: I1125 09:06:34.420091 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dfbr" event={"ID":"a1c65e75-fcb5-4834-b698-7f185702fdb8","Type":"ContainerStarted","Data":"73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb"} Nov 25 09:06:34 crc kubenswrapper[4760]: I1125 09:06:34.448005 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4dfbr" podStartSLOduration=29.025777687 podStartE2EDuration="32.447980954s" podCreationTimestamp="2025-11-25 09:06:02 +0000 UTC" firstStartedPulling="2025-11-25 09:06:30.37981672 +0000 UTC m=+3324.088847515" lastFinishedPulling="2025-11-25 09:06:33.802019997 +0000 UTC m=+3327.511050782" observedRunningTime="2025-11-25 09:06:34.439106 +0000 UTC m=+3328.148136815" watchObservedRunningTime="2025-11-25 09:06:34.447980954 +0000 UTC m=+3328.157011749" Nov 25 09:06:40 crc 
kubenswrapper[4760]: I1125 09:06:40.403551 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 09:06:41 crc kubenswrapper[4760]: I1125 09:06:41.533435 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"a546f694-04d6-4212-b53a-142420418b97","Type":"ContainerStarted","Data":"f797854b9d0fd441f309ce5569e4de336d7f922b9a1571fe41efdef9165774a7"} Nov 25 09:06:41 crc kubenswrapper[4760]: I1125 09:06:41.559513 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-full" podStartSLOduration=3.491842802 podStartE2EDuration="1m2.559494972s" podCreationTimestamp="2025-11-25 09:05:39 +0000 UTC" firstStartedPulling="2025-11-25 09:05:41.333028689 +0000 UTC m=+3275.042059484" lastFinishedPulling="2025-11-25 09:06:40.400680849 +0000 UTC m=+3334.109711654" observedRunningTime="2025-11-25 09:06:41.555309343 +0000 UTC m=+3335.264340158" watchObservedRunningTime="2025-11-25 09:06:41.559494972 +0000 UTC m=+3335.268525767" Nov 25 09:06:42 crc kubenswrapper[4760]: I1125 09:06:42.647582 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:42 crc kubenswrapper[4760]: I1125 09:06:42.648146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:42 crc kubenswrapper[4760]: I1125 09:06:42.714865 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:43 crc kubenswrapper[4760]: I1125 09:06:43.612105 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:43 crc kubenswrapper[4760]: I1125 09:06:43.660194 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-4dfbr"] Nov 25 09:06:44 crc kubenswrapper[4760]: I1125 09:06:44.938320 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:06:44 crc kubenswrapper[4760]: E1125 09:06:44.938981 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:06:45 crc kubenswrapper[4760]: I1125 09:06:45.568511 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4dfbr" podUID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerName="registry-server" containerID="cri-o://73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb" gracePeriod=2 Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.052100 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.163767 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-utilities\") pod \"a1c65e75-fcb5-4834-b698-7f185702fdb8\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.163846 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-catalog-content\") pod \"a1c65e75-fcb5-4834-b698-7f185702fdb8\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.163931 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtwxd\" (UniqueName: \"kubernetes.io/projected/a1c65e75-fcb5-4834-b698-7f185702fdb8-kube-api-access-jtwxd\") pod \"a1c65e75-fcb5-4834-b698-7f185702fdb8\" (UID: \"a1c65e75-fcb5-4834-b698-7f185702fdb8\") " Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.165046 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-utilities" (OuterVolumeSpecName: "utilities") pod "a1c65e75-fcb5-4834-b698-7f185702fdb8" (UID: "a1c65e75-fcb5-4834-b698-7f185702fdb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.171800 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c65e75-fcb5-4834-b698-7f185702fdb8-kube-api-access-jtwxd" (OuterVolumeSpecName: "kube-api-access-jtwxd") pod "a1c65e75-fcb5-4834-b698-7f185702fdb8" (UID: "a1c65e75-fcb5-4834-b698-7f185702fdb8"). InnerVolumeSpecName "kube-api-access-jtwxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.212347 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1c65e75-fcb5-4834-b698-7f185702fdb8" (UID: "a1c65e75-fcb5-4834-b698-7f185702fdb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.266882 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.266916 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c65e75-fcb5-4834-b698-7f185702fdb8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.266928 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtwxd\" (UniqueName: \"kubernetes.io/projected/a1c65e75-fcb5-4834-b698-7f185702fdb8-kube-api-access-jtwxd\") on node \"crc\" DevicePath \"\"" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.580591 4760 generic.go:334] "Generic (PLEG): container finished" podID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerID="73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb" exitCode=0 Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.580633 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dfbr" event={"ID":"a1c65e75-fcb5-4834-b698-7f185702fdb8","Type":"ContainerDied","Data":"73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb"} Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.580657 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-4dfbr" event={"ID":"a1c65e75-fcb5-4834-b698-7f185702fdb8","Type":"ContainerDied","Data":"80f3e55a43057886c88ef911572195da9b37db82d674df828290a6572d98cd9e"} Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.580675 4760 scope.go:117] "RemoveContainer" containerID="73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.581045 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4dfbr" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.608767 4760 scope.go:117] "RemoveContainer" containerID="183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.628469 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4dfbr"] Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.636925 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4dfbr"] Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.660825 4760 scope.go:117] "RemoveContainer" containerID="a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.701464 4760 scope.go:117] "RemoveContainer" containerID="73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb" Nov 25 09:06:46 crc kubenswrapper[4760]: E1125 09:06:46.702054 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb\": container with ID starting with 73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb not found: ID does not exist" containerID="73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 
09:06:46.702105 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb"} err="failed to get container status \"73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb\": rpc error: code = NotFound desc = could not find container \"73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb\": container with ID starting with 73a51cd17ba68dfdb0d203f62dee37f2a48e7c5b196e1cc3859cf08312cf63bb not found: ID does not exist" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.702140 4760 scope.go:117] "RemoveContainer" containerID="183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b" Nov 25 09:06:46 crc kubenswrapper[4760]: E1125 09:06:46.702839 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b\": container with ID starting with 183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b not found: ID does not exist" containerID="183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.702869 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b"} err="failed to get container status \"183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b\": rpc error: code = NotFound desc = could not find container \"183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b\": container with ID starting with 183e00f61fb3a7750c4e9aa07fe52357a49343417dbdff1dc6568cfd5fc5899b not found: ID does not exist" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.702887 4760 scope.go:117] "RemoveContainer" containerID="a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff" Nov 25 09:06:46 crc 
kubenswrapper[4760]: E1125 09:06:46.703615 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff\": container with ID starting with a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff not found: ID does not exist" containerID="a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.703673 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff"} err="failed to get container status \"a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff\": rpc error: code = NotFound desc = could not find container \"a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff\": container with ID starting with a119c897d078f960de35e370dca6d0324dd6882fa6deec39a4c2980499042eff not found: ID does not exist" Nov 25 09:06:46 crc kubenswrapper[4760]: I1125 09:06:46.954968 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c65e75-fcb5-4834-b698-7f185702fdb8" path="/var/lib/kubelet/pods/a1c65e75-fcb5-4834-b698-7f185702fdb8/volumes" Nov 25 09:06:58 crc kubenswrapper[4760]: I1125 09:06:58.939302 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:06:58 crc kubenswrapper[4760]: E1125 09:06:58.940157 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:07:13 crc 
kubenswrapper[4760]: I1125 09:07:13.938971 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:07:13 crc kubenswrapper[4760]: E1125 09:07:13.939829 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:07:28 crc kubenswrapper[4760]: I1125 09:07:28.939061 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:07:28 crc kubenswrapper[4760]: E1125 09:07:28.941009 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:07:33 crc kubenswrapper[4760]: I1125 09:07:33.792034 4760 scope.go:117] "RemoveContainer" containerID="b7708dac53c629790af655f35b63c60641caf85046634ae7d554880c1863927f" Nov 25 09:07:33 crc kubenswrapper[4760]: I1125 09:07:33.816052 4760 scope.go:117] "RemoveContainer" containerID="0eb7a25e447cd433bcbf6584aa988810c0bdab57265e24d95c556d71b283f8c7" Nov 25 09:07:33 crc kubenswrapper[4760]: I1125 09:07:33.837302 4760 scope.go:117] "RemoveContainer" containerID="333ad11054748d798ec477a1df68b23441a1780a482785d5465525ee13f079a8" Nov 25 09:07:33 crc kubenswrapper[4760]: I1125 09:07:33.891843 4760 scope.go:117] "RemoveContainer" 
containerID="67f33e898d65ddb339fddb37a852acd85e83d6fde99e2b13ac60b1d5d5440f89" Nov 25 09:07:33 crc kubenswrapper[4760]: I1125 09:07:33.937211 4760 scope.go:117] "RemoveContainer" containerID="fc7cf9bbb30cc77e4abfb6e3271c552fda99889b73c5841e2ecb3abd37c0d623" Nov 25 09:07:33 crc kubenswrapper[4760]: I1125 09:07:33.962895 4760 scope.go:117] "RemoveContainer" containerID="400ae8dd5257f0d13d17d074ec0f3511ca549403634c0cfca5969058bcb578d1" Nov 25 09:07:40 crc kubenswrapper[4760]: I1125 09:07:40.938905 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:07:40 crc kubenswrapper[4760]: E1125 09:07:40.939810 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:07:52 crc kubenswrapper[4760]: I1125 09:07:52.969296 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6qk"] Nov 25 09:07:52 crc kubenswrapper[4760]: E1125 09:07:52.970313 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerName="extract-utilities" Nov 25 09:07:52 crc kubenswrapper[4760]: I1125 09:07:52.970335 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerName="extract-utilities" Nov 25 09:07:52 crc kubenswrapper[4760]: E1125 09:07:52.970363 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerName="registry-server" Nov 25 09:07:52 crc kubenswrapper[4760]: I1125 09:07:52.970377 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerName="registry-server" Nov 25 09:07:52 crc kubenswrapper[4760]: E1125 09:07:52.970422 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerName="extract-content" Nov 25 09:07:52 crc kubenswrapper[4760]: I1125 09:07:52.970428 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerName="extract-content" Nov 25 09:07:52 crc kubenswrapper[4760]: I1125 09:07:52.970625 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c65e75-fcb5-4834-b698-7f185702fdb8" containerName="registry-server" Nov 25 09:07:52 crc kubenswrapper[4760]: I1125 09:07:52.972198 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:52 crc kubenswrapper[4760]: I1125 09:07:52.980819 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6qk"] Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.079852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhft2\" (UniqueName: \"kubernetes.io/projected/565b3453-8066-4d17-bff4-1c1157e1368f-kube-api-access-zhft2\") pod \"redhat-marketplace-2r6qk\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.079919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-catalog-content\") pod \"redhat-marketplace-2r6qk\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.079987 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-utilities\") pod \"redhat-marketplace-2r6qk\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.181663 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhft2\" (UniqueName: \"kubernetes.io/projected/565b3453-8066-4d17-bff4-1c1157e1368f-kube-api-access-zhft2\") pod \"redhat-marketplace-2r6qk\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.181722 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-catalog-content\") pod \"redhat-marketplace-2r6qk\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.181764 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-utilities\") pod \"redhat-marketplace-2r6qk\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.182325 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-catalog-content\") pod \"redhat-marketplace-2r6qk\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.182608 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-utilities\") pod \"redhat-marketplace-2r6qk\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.203800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhft2\" (UniqueName: \"kubernetes.io/projected/565b3453-8066-4d17-bff4-1c1157e1368f-kube-api-access-zhft2\") pod \"redhat-marketplace-2r6qk\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.297148 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:07:53 crc kubenswrapper[4760]: I1125 09:07:53.779946 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6qk"] Nov 25 09:07:54 crc kubenswrapper[4760]: I1125 09:07:54.204454 4760 generic.go:334] "Generic (PLEG): container finished" podID="565b3453-8066-4d17-bff4-1c1157e1368f" containerID="8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd" exitCode=0 Nov 25 09:07:54 crc kubenswrapper[4760]: I1125 09:07:54.204513 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6qk" event={"ID":"565b3453-8066-4d17-bff4-1c1157e1368f","Type":"ContainerDied","Data":"8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd"} Nov 25 09:07:54 crc kubenswrapper[4760]: I1125 09:07:54.204552 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6qk" event={"ID":"565b3453-8066-4d17-bff4-1c1157e1368f","Type":"ContainerStarted","Data":"d45e08f443339d195bfb43f471d56149678ee7cd9acd77f87ee5670b6d361c71"} Nov 25 09:07:55 crc kubenswrapper[4760]: I1125 09:07:55.938215 4760 
scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:07:55 crc kubenswrapper[4760]: E1125 09:07:55.938988 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:07:56 crc kubenswrapper[4760]: I1125 09:07:56.231860 4760 generic.go:334] "Generic (PLEG): container finished" podID="565b3453-8066-4d17-bff4-1c1157e1368f" containerID="8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1" exitCode=0 Nov 25 09:07:56 crc kubenswrapper[4760]: I1125 09:07:56.231930 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6qk" event={"ID":"565b3453-8066-4d17-bff4-1c1157e1368f","Type":"ContainerDied","Data":"8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1"} Nov 25 09:07:57 crc kubenswrapper[4760]: I1125 09:07:57.243811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6qk" event={"ID":"565b3453-8066-4d17-bff4-1c1157e1368f","Type":"ContainerStarted","Data":"dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f"} Nov 25 09:07:57 crc kubenswrapper[4760]: I1125 09:07:57.268156 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2r6qk" podStartSLOduration=2.798228635 podStartE2EDuration="5.268135871s" podCreationTimestamp="2025-11-25 09:07:52 +0000 UTC" firstStartedPulling="2025-11-25 09:07:54.206613794 +0000 UTC m=+3407.915644589" lastFinishedPulling="2025-11-25 09:07:56.67652103 +0000 UTC m=+3410.385551825" 
observedRunningTime="2025-11-25 09:07:57.266771752 +0000 UTC m=+3410.975802567" watchObservedRunningTime="2025-11-25 09:07:57.268135871 +0000 UTC m=+3410.977166666" Nov 25 09:08:03 crc kubenswrapper[4760]: I1125 09:08:03.298306 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:08:03 crc kubenswrapper[4760]: I1125 09:08:03.298924 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:08:03 crc kubenswrapper[4760]: I1125 09:08:03.352208 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:08:04 crc kubenswrapper[4760]: I1125 09:08:04.355696 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:08:04 crc kubenswrapper[4760]: I1125 09:08:04.408088 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6qk"] Nov 25 09:08:06 crc kubenswrapper[4760]: I1125 09:08:06.329617 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2r6qk" podUID="565b3453-8066-4d17-bff4-1c1157e1368f" containerName="registry-server" containerID="cri-o://dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f" gracePeriod=2 Nov 25 09:08:06 crc kubenswrapper[4760]: I1125 09:08:06.957554 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.065713 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-catalog-content\") pod \"565b3453-8066-4d17-bff4-1c1157e1368f\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.065794 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhft2\" (UniqueName: \"kubernetes.io/projected/565b3453-8066-4d17-bff4-1c1157e1368f-kube-api-access-zhft2\") pod \"565b3453-8066-4d17-bff4-1c1157e1368f\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.065822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-utilities\") pod \"565b3453-8066-4d17-bff4-1c1157e1368f\" (UID: \"565b3453-8066-4d17-bff4-1c1157e1368f\") " Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.066864 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-utilities" (OuterVolumeSpecName: "utilities") pod "565b3453-8066-4d17-bff4-1c1157e1368f" (UID: "565b3453-8066-4d17-bff4-1c1157e1368f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.068409 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.073492 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565b3453-8066-4d17-bff4-1c1157e1368f-kube-api-access-zhft2" (OuterVolumeSpecName: "kube-api-access-zhft2") pod "565b3453-8066-4d17-bff4-1c1157e1368f" (UID: "565b3453-8066-4d17-bff4-1c1157e1368f"). InnerVolumeSpecName "kube-api-access-zhft2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.086839 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "565b3453-8066-4d17-bff4-1c1157e1368f" (UID: "565b3453-8066-4d17-bff4-1c1157e1368f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.170542 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhft2\" (UniqueName: \"kubernetes.io/projected/565b3453-8066-4d17-bff4-1c1157e1368f-kube-api-access-zhft2\") on node \"crc\" DevicePath \"\"" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.170587 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/565b3453-8066-4d17-bff4-1c1157e1368f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.347840 4760 generic.go:334] "Generic (PLEG): container finished" podID="565b3453-8066-4d17-bff4-1c1157e1368f" containerID="dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f" exitCode=0 Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.347908 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6qk" event={"ID":"565b3453-8066-4d17-bff4-1c1157e1368f","Type":"ContainerDied","Data":"dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f"} Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.347982 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6qk" event={"ID":"565b3453-8066-4d17-bff4-1c1157e1368f","Type":"ContainerDied","Data":"d45e08f443339d195bfb43f471d56149678ee7cd9acd77f87ee5670b6d361c71"} Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.348026 4760 scope.go:117] "RemoveContainer" containerID="dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.349235 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6qk" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.369669 4760 scope.go:117] "RemoveContainer" containerID="8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.391834 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6qk"] Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.408984 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6qk"] Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.413554 4760 scope.go:117] "RemoveContainer" containerID="8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.444304 4760 scope.go:117] "RemoveContainer" containerID="dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f" Nov 25 09:08:07 crc kubenswrapper[4760]: E1125 09:08:07.444988 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f\": container with ID starting with dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f not found: ID does not exist" containerID="dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.445039 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f"} err="failed to get container status \"dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f\": rpc error: code = NotFound desc = could not find container \"dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f\": container with ID starting with dcc5d8ab4af64352df27dd76e058ba85f2535db5d71c6248e2d6832097b7d86f not found: 
ID does not exist" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.445068 4760 scope.go:117] "RemoveContainer" containerID="8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1" Nov 25 09:08:07 crc kubenswrapper[4760]: E1125 09:08:07.445666 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1\": container with ID starting with 8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1 not found: ID does not exist" containerID="8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.445697 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1"} err="failed to get container status \"8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1\": rpc error: code = NotFound desc = could not find container \"8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1\": container with ID starting with 8dc9a8cc4e3d706b42d27fcddea6e511d364bffe5a25db4329c9776d474537d1 not found: ID does not exist" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.445718 4760 scope.go:117] "RemoveContainer" containerID="8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd" Nov 25 09:08:07 crc kubenswrapper[4760]: E1125 09:08:07.446128 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd\": container with ID starting with 8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd not found: ID does not exist" containerID="8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd" Nov 25 09:08:07 crc kubenswrapper[4760]: I1125 09:08:07.446155 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd"} err="failed to get container status \"8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd\": rpc error: code = NotFound desc = could not find container \"8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd\": container with ID starting with 8d25191633d712a04c23cc434ea607667a0b5911a41d9caafbefcacc57da98cd not found: ID does not exist" Nov 25 09:08:08 crc kubenswrapper[4760]: I1125 09:08:08.939086 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:08:08 crc kubenswrapper[4760]: I1125 09:08:08.948596 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565b3453-8066-4d17-bff4-1c1157e1368f" path="/var/lib/kubelet/pods/565b3453-8066-4d17-bff4-1c1157e1368f/volumes" Nov 25 09:08:09 crc kubenswrapper[4760]: I1125 09:08:09.369189 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"95a670d13c42eb3ac6f3e3f1ae28374eb936ec37ccc3d0a7aab18131fbbe2cba"} Nov 25 09:09:49 crc kubenswrapper[4760]: I1125 09:09:49.058571 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-f2bf-account-create-gnm6f"] Nov 25 09:09:49 crc kubenswrapper[4760]: I1125 09:09:49.069651 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-nh6wn"] Nov 25 09:09:49 crc kubenswrapper[4760]: I1125 09:09:49.077725 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-f2bf-account-create-gnm6f"] Nov 25 09:09:49 crc kubenswrapper[4760]: I1125 09:09:49.090114 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-nh6wn"] Nov 25 09:09:50 crc kubenswrapper[4760]: I1125 09:09:50.949220 4760 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d23dc6f-cedb-4acd-9107-f39d6ed0f903" path="/var/lib/kubelet/pods/4d23dc6f-cedb-4acd-9107-f39d6ed0f903/volumes" Nov 25 09:09:50 crc kubenswrapper[4760]: I1125 09:09:50.949789 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f12d4c-8065-4ae2-835e-dd2cd09160a6" path="/var/lib/kubelet/pods/90f12d4c-8065-4ae2-835e-dd2cd09160a6/volumes" Nov 25 09:10:31 crc kubenswrapper[4760]: I1125 09:10:31.046613 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-pqtpz"] Nov 25 09:10:31 crc kubenswrapper[4760]: I1125 09:10:31.055832 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-pqtpz"] Nov 25 09:10:31 crc kubenswrapper[4760]: I1125 09:10:31.746605 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:10:31 crc kubenswrapper[4760]: I1125 09:10:31.746679 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:10:32 crc kubenswrapper[4760]: I1125 09:10:32.956967 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbad6e1-fbdc-43fb-8295-40975fd98c69" path="/var/lib/kubelet/pods/dcbad6e1-fbdc-43fb-8295-40975fd98c69/volumes" Nov 25 09:10:34 crc kubenswrapper[4760]: I1125 09:10:34.114157 4760 scope.go:117] "RemoveContainer" containerID="c392ecd0f3e335726d7bdfe7588957137a3b24844f83da30c849ceac47448fe3" Nov 25 09:10:34 crc kubenswrapper[4760]: I1125 09:10:34.190156 4760 
scope.go:117] "RemoveContainer" containerID="407840f261917036f4bf5db662948095e03fe61844c4491274ad88bb777d6122" Nov 25 09:10:34 crc kubenswrapper[4760]: I1125 09:10:34.224956 4760 scope.go:117] "RemoveContainer" containerID="9411ad2f204a8e4667ab5abcc3c500b08a4ad1d8fe7721925e881d59c50f391a" Nov 25 09:11:01 crc kubenswrapper[4760]: I1125 09:11:01.746648 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:11:01 crc kubenswrapper[4760]: I1125 09:11:01.747338 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:11:31 crc kubenswrapper[4760]: I1125 09:11:31.747437 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:11:31 crc kubenswrapper[4760]: I1125 09:11:31.748003 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:11:31 crc kubenswrapper[4760]: I1125 09:11:31.748061 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 
09:11:31 crc kubenswrapper[4760]: I1125 09:11:31.749120 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95a670d13c42eb3ac6f3e3f1ae28374eb936ec37ccc3d0a7aab18131fbbe2cba"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:11:31 crc kubenswrapper[4760]: I1125 09:11:31.749179 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://95a670d13c42eb3ac6f3e3f1ae28374eb936ec37ccc3d0a7aab18131fbbe2cba" gracePeriod=600 Nov 25 09:11:32 crc kubenswrapper[4760]: I1125 09:11:32.263018 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="95a670d13c42eb3ac6f3e3f1ae28374eb936ec37ccc3d0a7aab18131fbbe2cba" exitCode=0 Nov 25 09:11:32 crc kubenswrapper[4760]: I1125 09:11:32.263083 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"95a670d13c42eb3ac6f3e3f1ae28374eb936ec37ccc3d0a7aab18131fbbe2cba"} Nov 25 09:11:32 crc kubenswrapper[4760]: I1125 09:11:32.263472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94"} Nov 25 09:11:32 crc kubenswrapper[4760]: I1125 09:11:32.263502 4760 scope.go:117] "RemoveContainer" containerID="3871ac116074ff065a9d74411b9f33aa438e472eccf12c24ffdf4156c9ddf7f0" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.401211 4760 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 09:11:55 crc kubenswrapper[4760]: E1125 09:11:55.402082 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565b3453-8066-4d17-bff4-1c1157e1368f" containerName="registry-server" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.402095 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="565b3453-8066-4d17-bff4-1c1157e1368f" containerName="registry-server" Nov 25 09:11:55 crc kubenswrapper[4760]: E1125 09:11:55.402111 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565b3453-8066-4d17-bff4-1c1157e1368f" containerName="extract-content" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.402116 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="565b3453-8066-4d17-bff4-1c1157e1368f" containerName="extract-content" Nov 25 09:11:55 crc kubenswrapper[4760]: E1125 09:11:55.402137 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565b3453-8066-4d17-bff4-1c1157e1368f" containerName="extract-utilities" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.402145 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="565b3453-8066-4d17-bff4-1c1157e1368f" containerName="extract-utilities" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.402356 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="565b3453-8066-4d17-bff4-1c1157e1368f" containerName="registry-server" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.423647 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.428169 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.428265 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.445405 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.520292 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.520503 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.622659 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.622754 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.622966 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.639859 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:11:55 crc kubenswrapper[4760]: I1125 09:11:55.768180 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:11:56 crc kubenswrapper[4760]: I1125 09:11:56.228503 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 09:11:56 crc kubenswrapper[4760]: I1125 09:11:56.470755 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3","Type":"ContainerStarted","Data":"4e68893d2787c0e766f1d630e268d765238b5bb465eabbe13e97d7f660a1b1ff"} Nov 25 09:11:57 crc kubenswrapper[4760]: I1125 09:11:57.483419 4760 generic.go:334] "Generic (PLEG): container finished" podID="d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3" containerID="1805576c5cd2b84d564f10e99104acaa6a6cf9632d5c2768afd5d10e1b152a00" exitCode=0 Nov 25 09:11:57 crc kubenswrapper[4760]: I1125 09:11:57.484019 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3","Type":"ContainerDied","Data":"1805576c5cd2b84d564f10e99104acaa6a6cf9632d5c2768afd5d10e1b152a00"} Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.040234 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.197086 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kube-api-access\") pod \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\" (UID: \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\") " Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.197170 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kubelet-dir\") pod \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\" (UID: \"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3\") " Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.197325 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3" (UID: "d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.197758 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.203180 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3" (UID: "d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.299360 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.511699 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3","Type":"ContainerDied","Data":"4e68893d2787c0e766f1d630e268d765238b5bb465eabbe13e97d7f660a1b1ff"} Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.511745 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e68893d2787c0e766f1d630e268d765238b5bb465eabbe13e97d7f660a1b1ff" Nov 25 09:11:59 crc kubenswrapper[4760]: I1125 09:11:59.511807 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.204850 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 09:12:01 crc kubenswrapper[4760]: E1125 09:12:01.206225 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3" containerName="pruner" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.206286 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3" containerName="pruner" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.206870 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01f7f8d-8c2f-4f2d-bd18-5a2a8af633e3" containerName="pruner" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.207767 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.211540 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.211540 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.228803 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.336583 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d019e132-fba9-43bc-80c5-01bb4ac44303-kube-api-access\") pod \"installer-9-crc\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.336686 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.336715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-var-lock\") pod \"installer-9-crc\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.439236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/d019e132-fba9-43bc-80c5-01bb4ac44303-kube-api-access\") pod \"installer-9-crc\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.439381 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.439411 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-var-lock\") pod \"installer-9-crc\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.439492 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.439606 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-var-lock\") pod \"installer-9-crc\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.460625 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d019e132-fba9-43bc-80c5-01bb4ac44303-kube-api-access\") pod \"installer-9-crc\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " 
pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.528625 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:01 crc kubenswrapper[4760]: I1125 09:12:01.995710 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 09:12:02 crc kubenswrapper[4760]: I1125 09:12:02.544038 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d019e132-fba9-43bc-80c5-01bb4ac44303","Type":"ContainerStarted","Data":"bacdb7ebf5876d7728bb1bfc7aa564d8b496000144fc8c34be8c0704df1360e4"} Nov 25 09:12:02 crc kubenswrapper[4760]: I1125 09:12:02.544491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d019e132-fba9-43bc-80c5-01bb4ac44303","Type":"ContainerStarted","Data":"c45c81dbd2ab80598348600d0ab71287400aeebf94f62098abda5096cb000ebd"} Nov 25 09:12:02 crc kubenswrapper[4760]: I1125 09:12:02.566588 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.566562595 podStartE2EDuration="1.566562595s" podCreationTimestamp="2025-11-25 09:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:12:02.558914987 +0000 UTC m=+3656.267945802" watchObservedRunningTime="2025-11-25 09:12:02.566562595 +0000 UTC m=+3656.275593410" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.982210 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.983824 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 09:12:39 crc kubenswrapper[4760]: 
I1125 09:12:39.984003 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984012 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984177 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36" gracePeriod=15 Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984186 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106" gracePeriod=15 Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984236 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d" gracePeriod=15 Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984307 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6" gracePeriod=15 Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984240 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873" gracePeriod=15 Nov 25 09:12:39 crc kubenswrapper[4760]: E1125 09:12:39.984680 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984700 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 09:12:39 crc kubenswrapper[4760]: E1125 09:12:39.984716 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984722 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 09:12:39 crc kubenswrapper[4760]: E1125 09:12:39.984738 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984744 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 09:12:39 crc kubenswrapper[4760]: E1125 09:12:39.984755 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984761 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 09:12:39 crc kubenswrapper[4760]: E1125 09:12:39.984769 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984775 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 09:12:39 crc kubenswrapper[4760]: E1125 09:12:39.984788 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984794 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 09:12:39 crc kubenswrapper[4760]: E1125 09:12:39.984807 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.984813 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.985012 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.985029 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.985037 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.985049 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 09:12:39 crc 
kubenswrapper[4760]: I1125 09:12:39.985058 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.985068 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 09:12:39 crc kubenswrapper[4760]: I1125 09:12:39.990001 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.131067 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.131388 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.131411 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.131442 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.131466 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.131497 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.131514 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.131585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: 
I1125 09:12:40.235355 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.235461 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.235535 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.235507 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.235560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.235633 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.235705 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.236199 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.236234 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.236311 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.236635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.236716 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.237037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.237109 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.237156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.237230 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") 
pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.881096 4760 generic.go:334] "Generic (PLEG): container finished" podID="d019e132-fba9-43bc-80c5-01bb4ac44303" containerID="bacdb7ebf5876d7728bb1bfc7aa564d8b496000144fc8c34be8c0704df1360e4" exitCode=0 Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.881186 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d019e132-fba9-43bc-80c5-01bb4ac44303","Type":"ContainerDied","Data":"bacdb7ebf5876d7728bb1bfc7aa564d8b496000144fc8c34be8c0704df1360e4"} Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.882367 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.884578 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.888377 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.889584 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106" exitCode=0 Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.889619 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6" exitCode=0 Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.889627 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873" exitCode=0 Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.889633 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d" exitCode=2 Nov 25 09:12:40 crc kubenswrapper[4760]: I1125 09:12:40.889670 4760 scope.go:117] "RemoveContainer" containerID="cba68ac42bf5c75ebc839f9326b1dc0f3b0d4bfdd024d4d4417000623f98aa7b" Nov 25 09:12:41 crc kubenswrapper[4760]: E1125 09:12:41.060784 4760 token_manager.go:121] "Couldn't update token" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/serviceaccounts/frr-k8s-daemon/token\": dial tcp 38.129.56.21:6443: connect: connection refused" cacheKey="\"frr-k8s-daemon\"/\"metallb-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"frr-k8s-webhook-server-6998585d5-fzx95\", UID:\"3531211f-bf66-45cb-9c5f-4a7aca2efbad\"}" Nov 25 09:12:41 crc kubenswrapper[4760]: I1125 09:12:41.902864 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.606575 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.607969 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.617839 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.618667 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.619339 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.619856 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.693962 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-var-lock\") pod \"d019e132-fba9-43bc-80c5-01bb4ac44303\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " 
Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.694094 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.694141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d019e132-fba9-43bc-80c5-01bb4ac44303-kube-api-access\") pod \"d019e132-fba9-43bc-80c5-01bb4ac44303\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.694159 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-var-lock" (OuterVolumeSpecName: "var-lock") pod "d019e132-fba9-43bc-80c5-01bb4ac44303" (UID: "d019e132-fba9-43bc-80c5-01bb4ac44303"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.694179 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.694237 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.694313 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.694339 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.694431 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.694454 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-kubelet-dir\") pod \"d019e132-fba9-43bc-80c5-01bb4ac44303\" (UID: \"d019e132-fba9-43bc-80c5-01bb4ac44303\") " Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.695317 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.695338 4760 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.695346 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.695355 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.695384 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d019e132-fba9-43bc-80c5-01bb4ac44303" (UID: "d019e132-fba9-43bc-80c5-01bb4ac44303"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.700666 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d019e132-fba9-43bc-80c5-01bb4ac44303-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d019e132-fba9-43bc-80c5-01bb4ac44303" (UID: "d019e132-fba9-43bc-80c5-01bb4ac44303"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.797283 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d019e132-fba9-43bc-80c5-01bb4ac44303-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.797332 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d019e132-fba9-43bc-80c5-01bb4ac44303-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.913880 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d019e132-fba9-43bc-80c5-01bb4ac44303","Type":"ContainerDied","Data":"c45c81dbd2ab80598348600d0ab71287400aeebf94f62098abda5096cb000ebd"} Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.913923 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.913940 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c45c81dbd2ab80598348600d0ab71287400aeebf94f62098abda5096cb000ebd" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.916565 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.917283 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36" exitCode=0 Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.917345 4760 scope.go:117] "RemoveContainer" containerID="007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.917408 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.928835 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.930291 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.932525 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.932803 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.950030 4760 scope.go:117] "RemoveContainer" containerID="936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.952476 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" 
path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.974889 4760 scope.go:117] "RemoveContainer" containerID="2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873" Nov 25 09:12:42 crc kubenswrapper[4760]: I1125 09:12:42.999206 4760 scope.go:117] "RemoveContainer" containerID="e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.022393 4760 scope.go:117] "RemoveContainer" containerID="5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.045228 4760 scope.go:117] "RemoveContainer" containerID="f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.068329 4760 scope.go:117] "RemoveContainer" containerID="007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106" Nov 25 09:12:43 crc kubenswrapper[4760]: E1125 09:12:43.068818 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\": container with ID starting with 007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106 not found: ID does not exist" containerID="007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.068859 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106"} err="failed to get container status \"007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\": rpc error: code = NotFound desc = could not find container \"007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106\": container with ID starting with 007293dde823fa990b2ca37087ae62f66c58d97fbc9362bf1b1130e86ea97106 not 
found: ID does not exist" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.068883 4760 scope.go:117] "RemoveContainer" containerID="936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6" Nov 25 09:12:43 crc kubenswrapper[4760]: E1125 09:12:43.069553 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\": container with ID starting with 936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6 not found: ID does not exist" containerID="936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.069619 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6"} err="failed to get container status \"936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\": rpc error: code = NotFound desc = could not find container \"936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6\": container with ID starting with 936553498dc9f3cff8ca735670cb851ff0d8ea7a9b492d698044fe5a12f32cf6 not found: ID does not exist" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.069661 4760 scope.go:117] "RemoveContainer" containerID="2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873" Nov 25 09:12:43 crc kubenswrapper[4760]: E1125 09:12:43.070017 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\": container with ID starting with 2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873 not found: ID does not exist" containerID="2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.070052 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873"} err="failed to get container status \"2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\": rpc error: code = NotFound desc = could not find container \"2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873\": container with ID starting with 2fa77df7e03d38d31c68b7f5e82179414ceaa3e13e074a518c6d224db74a5873 not found: ID does not exist" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.070068 4760 scope.go:117] "RemoveContainer" containerID="e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d" Nov 25 09:12:43 crc kubenswrapper[4760]: E1125 09:12:43.070483 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\": container with ID starting with e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d not found: ID does not exist" containerID="e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.070533 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d"} err="failed to get container status \"e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\": rpc error: code = NotFound desc = could not find container \"e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d\": container with ID starting with e7b087f9c870d5bf6f2cbc9fd557ad710b99bc848f04b4cbd06a57543e6feb9d not found: ID does not exist" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.070554 4760 scope.go:117] "RemoveContainer" containerID="5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36" Nov 25 09:12:43 crc kubenswrapper[4760]: E1125 
09:12:43.070950 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\": container with ID starting with 5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36 not found: ID does not exist" containerID="5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.070973 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36"} err="failed to get container status \"5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\": rpc error: code = NotFound desc = could not find container \"5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36\": container with ID starting with 5cec7cc7b1daf05cea110b450bcf73b367af4cec2ecdc25ac4c6345ce4e7ce36 not found: ID does not exist" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.070985 4760 scope.go:117] "RemoveContainer" containerID="f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0" Nov 25 09:12:43 crc kubenswrapper[4760]: E1125 09:12:43.071560 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\": container with ID starting with f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0 not found: ID does not exist" containerID="f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0" Nov 25 09:12:43 crc kubenswrapper[4760]: I1125 09:12:43.071616 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0"} err="failed to get container status \"f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\": rpc 
error: code = NotFound desc = could not find container \"f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0\": container with ID starting with f67d212aefc67ea93e474229df0b5b029f6f387e6a7790a0176a318763f629c0 not found: ID does not exist" Nov 25 09:12:43 crc kubenswrapper[4760]: E1125 09:12:43.961217 4760 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openstack/ovsdbserver-sb-0" volumeName="ovndbcluster-sb-etc-ovn" Nov 25 09:12:44 crc kubenswrapper[4760]: E1125 09:12:44.024723 4760 token_manager.go:121] "Couldn't update token" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/serviceaccounts/frr-k8s-daemon/token\": dial tcp 38.129.56.21:6443: connect: connection refused" cacheKey="\"frr-k8s-daemon\"/\"metallb-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"frr-k8s-pw649\", UID:\"6deb0467-1ded-4513-8aad-5a7b6c671895\"}" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.020129 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:45 crc kubenswrapper[4760]: I1125 09:12:45.021134 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.057825 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.21:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b3502c4f3cd92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 09:12:45.056200082 +0000 UTC m=+3698.765230877,LastTimestamp:2025-11-25 09:12:45.056200082 +0000 UTC m=+3698.765230877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.432558 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.433690 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.434337 4760 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.434922 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.435324 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:45 crc kubenswrapper[4760]: I1125 09:12:45.435365 4760 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.435745 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="200ms" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.636588 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="400ms" Nov 25 09:12:45 crc kubenswrapper[4760]: I1125 09:12:45.959172 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d7fa2bbb4c070621a30840b407b5585b9527b02f41c32e3a016f270b1e8850e7"} Nov 25 09:12:45 crc 
kubenswrapper[4760]: I1125 09:12:45.959583 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5832439c6b6e992cf43c357c06437390e9964a71a4806b10a5ef5e0ea10d547e"} Nov 25 09:12:45 crc kubenswrapper[4760]: I1125 09:12:45.960410 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:45 crc kubenswrapper[4760]: E1125 09:12:45.961462 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:12:46 crc kubenswrapper[4760]: E1125 09:12:46.038116 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="800ms" Nov 25 09:12:46 crc kubenswrapper[4760]: E1125 09:12:46.839325 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="1.6s" Nov 25 09:12:46 crc kubenswrapper[4760]: I1125 09:12:46.944181 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:48 crc kubenswrapper[4760]: E1125 09:12:48.441200 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="3.2s" Nov 25 09:12:48 crc kubenswrapper[4760]: I1125 09:12:48.838479 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="bd20932f-cb28-4343-98df-425123f7c87f" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 09:12:51 crc kubenswrapper[4760]: E1125 09:12:51.616688 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.21:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b3502c4f3cd92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 09:12:45.056200082 +0000 UTC m=+3698.765230877,LastTimestamp:2025-11-25 09:12:45.056200082 +0000 UTC m=+3698.765230877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 09:12:51 crc kubenswrapper[4760]: E1125 09:12:51.642275 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="6.4s" Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.019073 4760 generic.go:334] "Generic (PLEG): container finished" podID="394da4a0-f1c0-45c3-a31b-9cace1180c53" containerID="6cc6b60bae09c6fcf7bce286981c52b8bfa986c423e015710f5d573f8ae10db2" exitCode=1 Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.019178 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" event={"ID":"394da4a0-f1c0-45c3-a31b-9cace1180c53","Type":"ContainerDied","Data":"6cc6b60bae09c6fcf7bce286981c52b8bfa986c423e015710f5d573f8ae10db2"} Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.020332 4760 scope.go:117] "RemoveContainer" containerID="6cc6b60bae09c6fcf7bce286981c52b8bfa986c423e015710f5d573f8ae10db2" Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.020484 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.020796 4760 status_manager.go:851] "Failed to get status for pod" podUID="394da4a0-f1c0-45c3-a31b-9cace1180c53" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-76784bbdf-m7z64\": dial tcp 38.129.56.21:6443: connect: 
connection refused" Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.938061 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.939186 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.939661 4760 status_manager.go:851] "Failed to get status for pod" podUID="394da4a0-f1c0-45c3-a31b-9cace1180c53" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-76784bbdf-m7z64\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.961302 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.961740 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:12:52 crc kubenswrapper[4760]: E1125 09:12:52.962317 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:52 crc kubenswrapper[4760]: I1125 09:12:52.962725 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:53 crc kubenswrapper[4760]: W1125 09:12:53.001637 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-db8c2dabbd2bda4d3114bd493e336ea485503040139333fb78c8c73e784086fb WatchSource:0}: Error finding container db8c2dabbd2bda4d3114bd493e336ea485503040139333fb78c8c73e784086fb: Status 404 returned error can't find the container with id db8c2dabbd2bda4d3114bd493e336ea485503040139333fb78c8c73e784086fb Nov 25 09:12:53 crc kubenswrapper[4760]: E1125 09:12:53.011673 4760 token_manager.go:121] "Couldn't update token" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token\": dial tcp 38.129.56.21:6443: connect: connection refused" cacheKey="\"route-controller-manager-sa\"/\"openshift-route-controller-manager\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"route-controller-manager-6c7dd549d5-6dmlp\", UID:\"db0f8a2c-ba6f-449d-a264-cc0c7e0c5e53\"}" Nov 25 09:12:53 crc kubenswrapper[4760]: I1125 09:12:53.041703 4760 generic.go:334] "Generic (PLEG): container finished" podID="394da4a0-f1c0-45c3-a31b-9cace1180c53" containerID="cbea14ee85403d952b615cae27edba12b1e9f01ef6fd8db4254a3ba49852c04d" exitCode=1 Nov 25 09:12:53 crc kubenswrapper[4760]: I1125 09:12:53.041777 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" event={"ID":"394da4a0-f1c0-45c3-a31b-9cace1180c53","Type":"ContainerDied","Data":"cbea14ee85403d952b615cae27edba12b1e9f01ef6fd8db4254a3ba49852c04d"} Nov 25 09:12:53 crc kubenswrapper[4760]: I1125 09:12:53.041817 4760 scope.go:117] "RemoveContainer" containerID="6cc6b60bae09c6fcf7bce286981c52b8bfa986c423e015710f5d573f8ae10db2" Nov 25 09:12:53 crc 
kubenswrapper[4760]: I1125 09:12:53.042502 4760 scope.go:117] "RemoveContainer" containerID="cbea14ee85403d952b615cae27edba12b1e9f01ef6fd8db4254a3ba49852c04d" Nov 25 09:12:53 crc kubenswrapper[4760]: I1125 09:12:53.042513 4760 status_manager.go:851] "Failed to get status for pod" podUID="394da4a0-f1c0-45c3-a31b-9cace1180c53" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-76784bbdf-m7z64\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:53 crc kubenswrapper[4760]: I1125 09:12:53.042694 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:53 crc kubenswrapper[4760]: E1125 09:12:53.042752 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-76784bbdf-m7z64_metallb-system(394da4a0-f1c0-45c3-a31b-9cace1180c53)\"" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" podUID="394da4a0-f1c0-45c3-a31b-9cace1180c53" Nov 25 09:12:53 crc kubenswrapper[4760]: I1125 09:12:53.043943 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"db8c2dabbd2bda4d3114bd493e336ea485503040139333fb78c8c73e784086fb"} Nov 25 09:12:53 crc kubenswrapper[4760]: I1125 09:12:53.832100 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 09:12:54 crc 
kubenswrapper[4760]: I1125 09:12:54.064199 4760 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d18536190b263128263d689265e19c9b12217f8e8e8b6cc06ab1a1a86650006d" exitCode=0 Nov 25 09:12:54 crc kubenswrapper[4760]: I1125 09:12:54.064315 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d18536190b263128263d689265e19c9b12217f8e8e8b6cc06ab1a1a86650006d"} Nov 25 09:12:54 crc kubenswrapper[4760]: I1125 09:12:54.064516 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:12:54 crc kubenswrapper[4760]: I1125 09:12:54.064536 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:12:54 crc kubenswrapper[4760]: E1125 09:12:54.064997 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:54 crc kubenswrapper[4760]: I1125 09:12:54.065627 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:54 crc kubenswrapper[4760]: I1125 09:12:54.066228 4760 status_manager.go:851] "Failed to get status for pod" podUID="394da4a0-f1c0-45c3-a31b-9cace1180c53" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-76784bbdf-m7z64\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:54 crc kubenswrapper[4760]: I1125 09:12:54.067072 4760 scope.go:117] "RemoveContainer" containerID="cbea14ee85403d952b615cae27edba12b1e9f01ef6fd8db4254a3ba49852c04d" Nov 25 09:12:54 crc kubenswrapper[4760]: E1125 09:12:54.067301 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-76784bbdf-m7z64_metallb-system(394da4a0-f1c0-45c3-a31b-9cace1180c53)\"" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" podUID="394da4a0-f1c0-45c3-a31b-9cace1180c53" Nov 25 09:12:54 crc kubenswrapper[4760]: I1125 09:12:54.067760 4760 status_manager.go:851] "Failed to get status for pod" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:54 crc kubenswrapper[4760]: I1125 09:12:54.068080 4760 status_manager.go:851] "Failed to get status for pod" podUID="394da4a0-f1c0-45c3-a31b-9cace1180c53" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-76784bbdf-m7z64\": dial tcp 38.129.56.21:6443: connect: connection refused" Nov 25 09:12:55 crc kubenswrapper[4760]: I1125 09:12:55.079689 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1265b8e1d140d6c8516874e998eef81f61d576960f2a228f91e158e687527897"} Nov 25 09:12:55 crc 
kubenswrapper[4760]: I1125 09:12:55.080294 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6bf5dde40716ca061dcf872e33fb94810045bc72d8bb945d0f58429ad9565c5c"} Nov 25 09:12:55 crc kubenswrapper[4760]: I1125 09:12:55.080312 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d03988307fa5245c13ae8ef593bb9d3594e6bc0062518337931cfde237f336b3"} Nov 25 09:12:55 crc kubenswrapper[4760]: I1125 09:12:55.099226 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 09:12:55 crc kubenswrapper[4760]: I1125 09:12:55.099365 4760 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01" exitCode=1 Nov 25 09:12:55 crc kubenswrapper[4760]: I1125 09:12:55.099421 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01"} Nov 25 09:12:55 crc kubenswrapper[4760]: I1125 09:12:55.103851 4760 scope.go:117] "RemoveContainer" containerID="612b8f348dc98d038172e6ab97877c2564f2b3a25971c9760ec215fd789abe01" Nov 25 09:12:56 crc kubenswrapper[4760]: I1125 09:12:56.112387 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e63755587b27e63b817f0dc78af972958c6bba29328eaf5226c4785f606b733"} Nov 25 09:12:56 crc kubenswrapper[4760]: I1125 
09:12:56.113659 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:56 crc kubenswrapper[4760]: I1125 09:12:56.113911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f430ccc4b080a1c710bb22679e2d9b85a71b200924f54ecde336caf2674c832d"} Nov 25 09:12:56 crc kubenswrapper[4760]: I1125 09:12:56.112792 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:12:56 crc kubenswrapper[4760]: I1125 09:12:56.114019 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:12:56 crc kubenswrapper[4760]: I1125 09:12:56.130416 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 09:12:56 crc kubenswrapper[4760]: I1125 09:12:56.130478 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f9de73abc069b040f235ec98370d0e0b6d6142e0fcb153c4ff742a80e176efee"} Nov 25 09:12:56 crc kubenswrapper[4760]: I1125 09:12:56.829131 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 09:12:57 crc kubenswrapper[4760]: I1125 09:12:57.963767 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:57 crc kubenswrapper[4760]: I1125 09:12:57.964150 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:57 crc kubenswrapper[4760]: I1125 09:12:57.979072 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:12:58 crc kubenswrapper[4760]: I1125 09:12:58.875592 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="bd20932f-cb28-4343-98df-425123f7c87f" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.133055 4760 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.174811 4760 generic.go:334] "Generic (PLEG): container finished" podID="23471a89-c4fb-4e45-b7bb-2664e4ea99f3" containerID="c80d0f86ae9c63a6bfaf2e60dba603165038ea221ed371e05df3887f97c065df" exitCode=1 Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.175239 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" event={"ID":"23471a89-c4fb-4e45-b7bb-2664e4ea99f3","Type":"ContainerDied","Data":"c80d0f86ae9c63a6bfaf2e60dba603165038ea221ed371e05df3887f97c065df"} Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.175718 4760 scope.go:117] "RemoveContainer" containerID="c80d0f86ae9c63a6bfaf2e60dba603165038ea221ed371e05df3887f97c065df" Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.177160 4760 generic.go:334] "Generic (PLEG): container finished" podID="1d556614-e3c1-4834-919a-0c6f5f5cc4de" containerID="0a118edce1f40fbbdd6a99feb6b0792560535a8e4c818798859296dcbbce765f" exitCode=1 Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.177239 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" 
event={"ID":"1d556614-e3c1-4834-919a-0c6f5f5cc4de","Type":"ContainerDied","Data":"0a118edce1f40fbbdd6a99feb6b0792560535a8e4c818798859296dcbbce765f"} Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.177605 4760 scope.go:117] "RemoveContainer" containerID="0a118edce1f40fbbdd6a99feb6b0792560535a8e4c818798859296dcbbce765f" Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.180861 4760 generic.go:334] "Generic (PLEG): container finished" podID="002e6b13-60c5-484c-8116-b4d5241ed678" containerID="ecb09c60390c5382a076a4d52832e1347803837617cc7a39429f6e75e369f0a6" exitCode=1 Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.180921 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" event={"ID":"002e6b13-60c5-484c-8116-b4d5241ed678","Type":"ContainerDied","Data":"ecb09c60390c5382a076a4d52832e1347803837617cc7a39429f6e75e369f0a6"} Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.181339 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.181357 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.181749 4760 scope.go:117] "RemoveContainer" containerID="ecb09c60390c5382a076a4d52832e1347803837617cc7a39429f6e75e369f0a6" Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.185620 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:13:01 crc kubenswrapper[4760]: I1125 09:13:01.236686 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="2fd156f5-79f2-475c-8a3d-3c9a6c7890b9" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.191010 4760 generic.go:334] "Generic (PLEG): container finished" podID="8aea8bb6-720b-412a-acfc-f62366da5de5" containerID="bba1c0376c5c153ef9c035da71b8692fdf23af163211330cafffdcc7b4fdc3c5" exitCode=1 Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.191069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" event={"ID":"8aea8bb6-720b-412a-acfc-f62366da5de5","Type":"ContainerDied","Data":"bba1c0376c5c153ef9c035da71b8692fdf23af163211330cafffdcc7b4fdc3c5"} Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.193365 4760 scope.go:117] "RemoveContainer" containerID="bba1c0376c5c153ef9c035da71b8692fdf23af163211330cafffdcc7b4fdc3c5" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.194305 4760 generic.go:334] "Generic (PLEG): container finished" podID="002e6b13-60c5-484c-8116-b4d5241ed678" containerID="5a1c09aa44ace2d2787826b2246848237a891936c855509cd2aac9fd24069541" exitCode=1 Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.194376 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" event={"ID":"002e6b13-60c5-484c-8116-b4d5241ed678","Type":"ContainerDied","Data":"5a1c09aa44ace2d2787826b2246848237a891936c855509cd2aac9fd24069541"} Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.194427 4760 scope.go:117] "RemoveContainer" containerID="ecb09c60390c5382a076a4d52832e1347803837617cc7a39429f6e75e369f0a6" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.195330 4760 scope.go:117] "RemoveContainer" containerID="5a1c09aa44ace2d2787826b2246848237a891936c855509cd2aac9fd24069541" Nov 25 09:13:02 crc kubenswrapper[4760]: E1125 09:13:02.195615 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-54bpm_openstack-operators(002e6b13-60c5-484c-8116-b4d5241ed678)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" podUID="002e6b13-60c5-484c-8116-b4d5241ed678" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.198524 4760 generic.go:334] "Generic (PLEG): container finished" podID="23471a89-c4fb-4e45-b7bb-2664e4ea99f3" containerID="e4243a7d630434fb3c8a541704a72bcde8912858215c199c8b8d166ba68d7290" exitCode=1 Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.198598 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" event={"ID":"23471a89-c4fb-4e45-b7bb-2664e4ea99f3","Type":"ContainerDied","Data":"e4243a7d630434fb3c8a541704a72bcde8912858215c199c8b8d166ba68d7290"} Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.199382 4760 scope.go:117] "RemoveContainer" containerID="e4243a7d630434fb3c8a541704a72bcde8912858215c199c8b8d166ba68d7290" Nov 25 09:13:02 crc kubenswrapper[4760]: E1125 09:13:02.199725 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-j5fsj_openstack-operators(23471a89-c4fb-4e45-b7bb-2664e4ea99f3)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" podUID="23471a89-c4fb-4e45-b7bb-2664e4ea99f3" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.202383 4760 generic.go:334] "Generic (PLEG): container finished" podID="cef58941-ae6b-4624-af41-65ab598838eb" containerID="f78b709dbad6c6e20e05142a94c68fe4609db950922312a6d6a99c81f12b12ef" exitCode=1 Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.202500 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" 
event={"ID":"cef58941-ae6b-4624-af41-65ab598838eb","Type":"ContainerDied","Data":"f78b709dbad6c6e20e05142a94c68fe4609db950922312a6d6a99c81f12b12ef"} Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.203279 4760 scope.go:117] "RemoveContainer" containerID="f78b709dbad6c6e20e05142a94c68fe4609db950922312a6d6a99c81f12b12ef" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.206534 4760 generic.go:334] "Generic (PLEG): container finished" podID="1d556614-e3c1-4834-919a-0c6f5f5cc4de" containerID="27635df616130284a15eb94a27e9ebd98473f3d62df0e337b90391afbbd16971" exitCode=1 Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.206786 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.206802 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5b1e0cae-103c-4c99-bfde-5c974e0d674c" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.206920 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" event={"ID":"1d556614-e3c1-4834-919a-0c6f5f5cc4de","Type":"ContainerDied","Data":"27635df616130284a15eb94a27e9ebd98473f3d62df0e337b90391afbbd16971"} Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.207505 4760 scope.go:117] "RemoveContainer" containerID="27635df616130284a15eb94a27e9ebd98473f3d62df0e337b90391afbbd16971" Nov 25 09:13:02 crc kubenswrapper[4760]: E1125 09:13:02.207759 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-kw54v_openstack-operators(1d556614-e3c1-4834-919a-0c6f5f5cc4de)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" 
podUID="1d556614-e3c1-4834-919a-0c6f5f5cc4de" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.307465 4760 scope.go:117] "RemoveContainer" containerID="c80d0f86ae9c63a6bfaf2e60dba603165038ea221ed371e05df3887f97c065df" Nov 25 09:13:02 crc kubenswrapper[4760]: I1125 09:13:02.515240 4760 scope.go:117] "RemoveContainer" containerID="0a118edce1f40fbbdd6a99feb6b0792560535a8e4c818798859296dcbbce765f" Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.224025 4760 generic.go:334] "Generic (PLEG): container finished" podID="8aea8bb6-720b-412a-acfc-f62366da5de5" containerID="4ab285b77ae28bca1478bf4618bead2930f12450245e00f95e17ed480309a5a1" exitCode=1 Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.224108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" event={"ID":"8aea8bb6-720b-412a-acfc-f62366da5de5","Type":"ContainerDied","Data":"4ab285b77ae28bca1478bf4618bead2930f12450245e00f95e17ed480309a5a1"} Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.224149 4760 scope.go:117] "RemoveContainer" containerID="bba1c0376c5c153ef9c035da71b8692fdf23af163211330cafffdcc7b4fdc3c5" Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.230081 4760 scope.go:117] "RemoveContainer" containerID="4ab285b77ae28bca1478bf4618bead2930f12450245e00f95e17ed480309a5a1" Nov 25 09:13:03 crc kubenswrapper[4760]: E1125 09:13:03.231158 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pmw6n_openstack-operators(8aea8bb6-720b-412a-acfc-f62366da5de5)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" podUID="8aea8bb6-720b-412a-acfc-f62366da5de5" Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.232393 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="4e773e83-c06c-47e9-8a34-ef72472e3ae8" containerID="fdea6e7cda5309041600d82d5850e20001daf4f49aa39c5b2bf0aa27a453ca9a" exitCode=1 Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.232463 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" event={"ID":"4e773e83-c06c-47e9-8a34-ef72472e3ae8","Type":"ContainerDied","Data":"fdea6e7cda5309041600d82d5850e20001daf4f49aa39c5b2bf0aa27a453ca9a"} Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.232936 4760 scope.go:117] "RemoveContainer" containerID="fdea6e7cda5309041600d82d5850e20001daf4f49aa39c5b2bf0aa27a453ca9a" Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.249280 4760 generic.go:334] "Generic (PLEG): container finished" podID="c43ab37e-375d-4000-8313-9ea135250641" containerID="62375c6e6c46b8016b2db27f4ad6c08e80140b22fe4b9645f5ad386d7d26929f" exitCode=1 Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.249416 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" event={"ID":"c43ab37e-375d-4000-8313-9ea135250641","Type":"ContainerDied","Data":"62375c6e6c46b8016b2db27f4ad6c08e80140b22fe4b9645f5ad386d7d26929f"} Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.250196 4760 scope.go:117] "RemoveContainer" containerID="62375c6e6c46b8016b2db27f4ad6c08e80140b22fe4b9645f5ad386d7d26929f" Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.254053 4760 generic.go:334] "Generic (PLEG): container finished" podID="59482a15-4638-4508-b60c-1c60c8df6d09" containerID="c5cbe35e0f38c2d8743b4705cf0e0dd18fb4499fb5307499421fb426460d6a49" exitCode=1 Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.254154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" 
event={"ID":"59482a15-4638-4508-b60c-1c60c8df6d09","Type":"ContainerDied","Data":"c5cbe35e0f38c2d8743b4705cf0e0dd18fb4499fb5307499421fb426460d6a49"} Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.257347 4760 scope.go:117] "RemoveContainer" containerID="c5cbe35e0f38c2d8743b4705cf0e0dd18fb4499fb5307499421fb426460d6a49" Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.258062 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.260894 4760 generic.go:334] "Generic (PLEG): container finished" podID="cef58941-ae6b-4624-af41-65ab598838eb" containerID="e9c742beaa955660cce157c3f9fb47c1e4cf20171acc817ba09189c5d70db486" exitCode=1 Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.260943 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" event={"ID":"cef58941-ae6b-4624-af41-65ab598838eb","Type":"ContainerDied","Data":"e9c742beaa955660cce157c3f9fb47c1e4cf20171acc817ba09189c5d70db486"} Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.261560 4760 scope.go:117] "RemoveContainer" containerID="e9c742beaa955660cce157c3f9fb47c1e4cf20171acc817ba09189c5d70db486" Nov 25 09:13:03 crc kubenswrapper[4760]: E1125 09:13:03.261876 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-plxrr_openstack-operators(cef58941-ae6b-4624-af41-65ab598838eb)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb" Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.262079 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 09:13:03 crc kubenswrapper[4760]: I1125 09:13:03.425665 4760 scope.go:117] "RemoveContainer" containerID="f78b709dbad6c6e20e05142a94c68fe4609db950922312a6d6a99c81f12b12ef" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.059072 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.060311 4760 scope.go:117] "RemoveContainer" containerID="27635df616130284a15eb94a27e9ebd98473f3d62df0e337b90391afbbd16971" Nov 25 09:13:04 crc kubenswrapper[4760]: E1125 09:13:04.060601 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-kw54v_openstack-operators(1d556614-e3c1-4834-919a-0c6f5f5cc4de)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" podUID="1d556614-e3c1-4834-919a-0c6f5f5cc4de" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.068071 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" podUID="6dde35ac-ff01-4e46-9eae-234e6abc37dc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.271795 4760 generic.go:334] "Generic (PLEG): container finished" podID="b4325bd6-c276-4fbc-bc67-cf5a026c3537" containerID="3bf03df4953d259610af803731deb9aaf22d28bcc3b549ed11c7093e123d5b4a" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.271876 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" 
event={"ID":"b4325bd6-c276-4fbc-bc67-cf5a026c3537","Type":"ContainerDied","Data":"3bf03df4953d259610af803731deb9aaf22d28bcc3b549ed11c7093e123d5b4a"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.273901 4760 scope.go:117] "RemoveContainer" containerID="3bf03df4953d259610af803731deb9aaf22d28bcc3b549ed11c7093e123d5b4a" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.276194 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a9ee81-2733-444d-8edc-ddb1303b5686" containerID="6cc06ddc45048296f24515a3ee7d592b625eb990c80087a56169b579e5f0d1c1" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.276270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" event={"ID":"03a9ee81-2733-444d-8edc-ddb1303b5686","Type":"ContainerDied","Data":"6cc06ddc45048296f24515a3ee7d592b625eb990c80087a56169b579e5f0d1c1"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.276970 4760 scope.go:117] "RemoveContainer" containerID="6cc06ddc45048296f24515a3ee7d592b625eb990c80087a56169b579e5f0d1c1" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.285013 4760 generic.go:334] "Generic (PLEG): container finished" podID="6dde35ac-ff01-4e46-9eae-234e6abc37dc" containerID="365799acb56992f20ec49ef9a96eb81e58cf921aa96746555ab528df55407607" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.285212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" event={"ID":"6dde35ac-ff01-4e46-9eae-234e6abc37dc","Type":"ContainerDied","Data":"365799acb56992f20ec49ef9a96eb81e58cf921aa96746555ab528df55407607"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.286085 4760 scope.go:117] "RemoveContainer" containerID="365799acb56992f20ec49ef9a96eb81e58cf921aa96746555ab528df55407607" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.291504 4760 generic.go:334] "Generic (PLEG): container 
finished" podID="4e773e83-c06c-47e9-8a34-ef72472e3ae8" containerID="5dc79c53c3c03ca9b6303fb264d81aff4b24ad2046efcd243289517f5eddc3da" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.291580 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" event={"ID":"4e773e83-c06c-47e9-8a34-ef72472e3ae8","Type":"ContainerDied","Data":"5dc79c53c3c03ca9b6303fb264d81aff4b24ad2046efcd243289517f5eddc3da"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.291640 4760 scope.go:117] "RemoveContainer" containerID="fdea6e7cda5309041600d82d5850e20001daf4f49aa39c5b2bf0aa27a453ca9a" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.292452 4760 scope.go:117] "RemoveContainer" containerID="5dc79c53c3c03ca9b6303fb264d81aff4b24ad2046efcd243289517f5eddc3da" Nov 25 09:13:04 crc kubenswrapper[4760]: E1125 09:13:04.292741 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-cxjcf_openstack-operators(4e773e83-c06c-47e9-8a34-ef72472e3ae8)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" podUID="4e773e83-c06c-47e9-8a34-ef72472e3ae8" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.295017 4760 generic.go:334] "Generic (PLEG): container finished" podID="c43ab37e-375d-4000-8313-9ea135250641" containerID="29f29daba05aae1427f522161eedad14a829ee85ddd5a85bfbb28d962e5d59df" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.295086 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" event={"ID":"c43ab37e-375d-4000-8313-9ea135250641","Type":"ContainerDied","Data":"29f29daba05aae1427f522161eedad14a829ee85ddd5a85bfbb28d962e5d59df"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.295757 4760 
scope.go:117] "RemoveContainer" containerID="29f29daba05aae1427f522161eedad14a829ee85ddd5a85bfbb28d962e5d59df" Nov 25 09:13:04 crc kubenswrapper[4760]: E1125 09:13:04.295998 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-7cd5954d9-wmmn4_openstack-operators(c43ab37e-375d-4000-8313-9ea135250641)\"" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" podUID="c43ab37e-375d-4000-8313-9ea135250641" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.299272 4760 generic.go:334] "Generic (PLEG): container finished" podID="9291524e-d650-4366-b795-162d53bf2815" containerID="3773418666d6c1aa765b32572dc5f7d2064dce044f3934d02959369d7bc6b072" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.299288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" event={"ID":"9291524e-d650-4366-b795-162d53bf2815","Type":"ContainerDied","Data":"3773418666d6c1aa765b32572dc5f7d2064dce044f3934d02959369d7bc6b072"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.300205 4760 scope.go:117] "RemoveContainer" containerID="3773418666d6c1aa765b32572dc5f7d2064dce044f3934d02959369d7bc6b072" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.302692 4760 generic.go:334] "Generic (PLEG): container finished" podID="fe16fe4f-1740-4d43-a0d2-0d1d649c853c" containerID="51e93dee920c6e6dc3b6c12a306270bbf1314dd407485389b62cbca7989f0403" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.302779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" event={"ID":"fe16fe4f-1740-4d43-a0d2-0d1d649c853c","Type":"ContainerDied","Data":"51e93dee920c6e6dc3b6c12a306270bbf1314dd407485389b62cbca7989f0403"} Nov 25 
09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.303541 4760 scope.go:117] "RemoveContainer" containerID="51e93dee920c6e6dc3b6c12a306270bbf1314dd407485389b62cbca7989f0403" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.306891 4760 generic.go:334] "Generic (PLEG): container finished" podID="65361481-df4d-4010-a478-91fd2c50d9e6" containerID="963bf83dfc51cf642f3f5f4f3376f99812aecbff49365eaf6e05541ce2015fe4" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.306956 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" event={"ID":"65361481-df4d-4010-a478-91fd2c50d9e6","Type":"ContainerDied","Data":"963bf83dfc51cf642f3f5f4f3376f99812aecbff49365eaf6e05541ce2015fe4"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.307593 4760 scope.go:117] "RemoveContainer" containerID="963bf83dfc51cf642f3f5f4f3376f99812aecbff49365eaf6e05541ce2015fe4" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.309002 4760 generic.go:334] "Generic (PLEG): container finished" podID="a9a9b42e-4d3b-495e-804e-af02af05581d" containerID="1ed89f44c4cb3d5308462671a5bfdb712260c4f51b4f04768897c8a3c4d206f6" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.309074 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" event={"ID":"a9a9b42e-4d3b-495e-804e-af02af05581d","Type":"ContainerDied","Data":"1ed89f44c4cb3d5308462671a5bfdb712260c4f51b4f04768897c8a3c4d206f6"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.309645 4760 scope.go:117] "RemoveContainer" containerID="1ed89f44c4cb3d5308462671a5bfdb712260c4f51b4f04768897c8a3c4d206f6" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.315240 4760 generic.go:334] "Generic (PLEG): container finished" podID="f531ae0e-78ad-4d2c-951f-0d1f7d1c8129" containerID="53225b6ce3c8a83c4ad7786e8ecd947b524c027578b930d4ea2430a141b6896b" exitCode=1 Nov 25 09:13:04 
crc kubenswrapper[4760]: I1125 09:13:04.315297 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" event={"ID":"f531ae0e-78ad-4d2c-951f-0d1f7d1c8129","Type":"ContainerDied","Data":"53225b6ce3c8a83c4ad7786e8ecd947b524c027578b930d4ea2430a141b6896b"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.316279 4760 scope.go:117] "RemoveContainer" containerID="53225b6ce3c8a83c4ad7786e8ecd947b524c027578b930d4ea2430a141b6896b" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.323770 4760 generic.go:334] "Generic (PLEG): container finished" podID="f0f31412-34be-4b9d-8df1-b53d23abb1f6" containerID="a42195511db5c10b0b1cb254dbbfec7cd13dc0f4b1a554e46fa8f18c39064ba7" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.323823 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" event={"ID":"f0f31412-34be-4b9d-8df1-b53d23abb1f6","Type":"ContainerDied","Data":"a42195511db5c10b0b1cb254dbbfec7cd13dc0f4b1a554e46fa8f18c39064ba7"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.324187 4760 scope.go:117] "RemoveContainer" containerID="a42195511db5c10b0b1cb254dbbfec7cd13dc0f4b1a554e46fa8f18c39064ba7" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.325635 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.328663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" event={"ID":"59482a15-4638-4508-b60c-1c60c8df6d09","Type":"ContainerStarted","Data":"5ad686b6a41333a64c40f91a90fe74aea66605fe04f6d06ef53ef8e45a074b4c"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.329365 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.335403 4760 generic.go:334] "Generic (PLEG): container finished" podID="25f372bf-e250-492b-abb9-680b1efdbdec" containerID="78d35fa844f9306bf9e9c781f238abe91ee4e07a9af371de8b90edc168d0f3fc" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.335472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" event={"ID":"25f372bf-e250-492b-abb9-680b1efdbdec","Type":"ContainerDied","Data":"78d35fa844f9306bf9e9c781f238abe91ee4e07a9af371de8b90edc168d0f3fc"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.336219 4760 scope.go:117] "RemoveContainer" containerID="78d35fa844f9306bf9e9c781f238abe91ee4e07a9af371de8b90edc168d0f3fc" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.345647 4760 generic.go:334] "Generic (PLEG): container finished" podID="97e97ce2-b50b-478e-acb2-cbdd5232d67c" containerID="476af3dd083d0a100d050519fda6d03ee35e63ecb50e5fd2c9e8258c54fc91bc" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.345768 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" event={"ID":"97e97ce2-b50b-478e-acb2-cbdd5232d67c","Type":"ContainerDied","Data":"476af3dd083d0a100d050519fda6d03ee35e63ecb50e5fd2c9e8258c54fc91bc"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.347194 4760 scope.go:117] "RemoveContainer" containerID="476af3dd083d0a100d050519fda6d03ee35e63ecb50e5fd2c9e8258c54fc91bc" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.365714 4760 generic.go:334] "Generic (PLEG): container finished" podID="33faed21-8b19-4064-a6e2-5064ce8cbab2" containerID="f374509f532646a61b20dd3beddfed971429fa3250d97e3645c9b0a746a8e178" exitCode=1 Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.366436 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" event={"ID":"33faed21-8b19-4064-a6e2-5064ce8cbab2","Type":"ContainerDied","Data":"f374509f532646a61b20dd3beddfed971429fa3250d97e3645c9b0a746a8e178"} Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.367182 4760 scope.go:117] "RemoveContainer" containerID="f374509f532646a61b20dd3beddfed971429fa3250d97e3645c9b0a746a8e178" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.463418 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.534635 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.603111 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.653345 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.654212 4760 scope.go:117] "RemoveContainer" containerID="5a1c09aa44ace2d2787826b2246848237a891936c855509cd2aac9fd24069541" Nov 25 09:13:04 crc kubenswrapper[4760]: E1125 09:13:04.654485 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-54bpm_openstack-operators(002e6b13-60c5-484c-8116-b4d5241ed678)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" podUID="002e6b13-60c5-484c-8116-b4d5241ed678" Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.681086 
4760 scope.go:117] "RemoveContainer" containerID="62375c6e6c46b8016b2db27f4ad6c08e80140b22fe4b9645f5ad386d7d26929f"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.739930 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.740562 4760 scope.go:117] "RemoveContainer" containerID="e4243a7d630434fb3c8a541704a72bcde8912858215c199c8b8d166ba68d7290"
Nov 25 09:13:04 crc kubenswrapper[4760]: E1125 09:13:04.740786 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-j5fsj_openstack-operators(23471a89-c4fb-4e45-b7bb-2664e4ea99f3)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" podUID="23471a89-c4fb-4e45-b7bb-2664e4ea99f3"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.762700 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.825216 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.825888 4760 scope.go:117] "RemoveContainer" containerID="4ab285b77ae28bca1478bf4618bead2930f12450245e00f95e17ed480309a5a1"
Nov 25 09:13:04 crc kubenswrapper[4760]: E1125 09:13:04.827185 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pmw6n_openstack-operators(8aea8bb6-720b-412a-acfc-f62366da5de5)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" podUID="8aea8bb6-720b-412a-acfc-f62366da5de5"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.873975 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" podUID="6d9d0ad6-0976-4f14-81fb-f286f6768256" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.896886 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.898075 4760 scope.go:117] "RemoveContainer" containerID="e9c742beaa955660cce157c3f9fb47c1e4cf20171acc817ba09189c5d70db486"
Nov 25 09:13:04 crc kubenswrapper[4760]: E1125 09:13:04.898403 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-plxrr_openstack-operators(cef58941-ae6b-4624-af41-65ab598838eb)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.941937 4760 scope.go:117] "RemoveContainer" containerID="cbea14ee85403d952b615cae27edba12b1e9f01ef6fd8db4254a3ba49852c04d"
Nov 25 09:13:04 crc kubenswrapper[4760]: I1125 09:13:04.977140 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" podUID="0f496ee1-ca51-427f-a51d-4fc214c7f50a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": dial tcp 10.217.0.93:8081: connect: connection refused"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.377751 4760 generic.go:334] "Generic (PLEG): container finished" podID="6dde35ac-ff01-4e46-9eae-234e6abc37dc" containerID="0332ad36cd62f0151d1d92f9e0ecff9e2b50385a38068b6f6a37c73f897293eb" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.377814 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" event={"ID":"6dde35ac-ff01-4e46-9eae-234e6abc37dc","Type":"ContainerDied","Data":"0332ad36cd62f0151d1d92f9e0ecff9e2b50385a38068b6f6a37c73f897293eb"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.378149 4760 scope.go:117] "RemoveContainer" containerID="365799acb56992f20ec49ef9a96eb81e58cf921aa96746555ab528df55407607"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.379113 4760 scope.go:117] "RemoveContainer" containerID="0332ad36cd62f0151d1d92f9e0ecff9e2b50385a38068b6f6a37c73f897293eb"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.379553 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-x7r44_openstack-operators(6dde35ac-ff01-4e46-9eae-234e6abc37dc)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" podUID="6dde35ac-ff01-4e46-9eae-234e6abc37dc"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.381848 4760 generic.go:334] "Generic (PLEG): container finished" podID="33faed21-8b19-4064-a6e2-5064ce8cbab2" containerID="6da083de3278fe701ddf2d001a9498b330aa9702e5bc373edddd2c153cb45a79" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.381893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" event={"ID":"33faed21-8b19-4064-a6e2-5064ce8cbab2","Type":"ContainerDied","Data":"6da083de3278fe701ddf2d001a9498b330aa9702e5bc373edddd2c153cb45a79"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.382951 4760 scope.go:117] "RemoveContainer" containerID="6da083de3278fe701ddf2d001a9498b330aa9702e5bc373edddd2c153cb45a79"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.383273 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-njfjf_openstack-operators(33faed21-8b19-4064-a6e2-5064ce8cbab2)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" podUID="33faed21-8b19-4064-a6e2-5064ce8cbab2"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.386731 4760 generic.go:334] "Generic (PLEG): container finished" podID="b4325bd6-c276-4fbc-bc67-cf5a026c3537" containerID="447b600e223dd023afc665bbc8e69de26c4c58899c9aae857fc26a2bd42f8ad3" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.386791 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" event={"ID":"b4325bd6-c276-4fbc-bc67-cf5a026c3537","Type":"ContainerDied","Data":"447b600e223dd023afc665bbc8e69de26c4c58899c9aae857fc26a2bd42f8ad3"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.387516 4760 scope.go:117] "RemoveContainer" containerID="447b600e223dd023afc665bbc8e69de26c4c58899c9aae857fc26a2bd42f8ad3"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.387784 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-l24ns_openstack-operators(b4325bd6-c276-4fbc-bc67-cf5a026c3537)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" podUID="b4325bd6-c276-4fbc-bc67-cf5a026c3537"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.391329 4760 generic.go:334] "Generic (PLEG): container finished" podID="0f496ee1-ca51-427f-a51d-4fc214c7f50a" containerID="4c25e20700be96ef60479e8ae592b3174f0611826b3ba3aefc1c35ce0702f23b" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.391405 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" event={"ID":"0f496ee1-ca51-427f-a51d-4fc214c7f50a","Type":"ContainerDied","Data":"4c25e20700be96ef60479e8ae592b3174f0611826b3ba3aefc1c35ce0702f23b"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.392045 4760 scope.go:117] "RemoveContainer" containerID="4c25e20700be96ef60479e8ae592b3174f0611826b3ba3aefc1c35ce0702f23b"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.396569 4760 generic.go:334] "Generic (PLEG): container finished" podID="890067e5-2be8-4699-8d90-f2771ef453e5" containerID="373db6e0c2b67d0d63ddfbebfb084a2bfedd2006f38f42a6725dd6dfcadf172d" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.396627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" event={"ID":"890067e5-2be8-4699-8d90-f2771ef453e5","Type":"ContainerDied","Data":"373db6e0c2b67d0d63ddfbebfb084a2bfedd2006f38f42a6725dd6dfcadf172d"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.397150 4760 scope.go:117] "RemoveContainer" containerID="373db6e0c2b67d0d63ddfbebfb084a2bfedd2006f38f42a6725dd6dfcadf172d"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.401591 4760 generic.go:334] "Generic (PLEG): container finished" podID="f531ae0e-78ad-4d2c-951f-0d1f7d1c8129" containerID="ab055c959ebc58c38c3f6418b80043684df193808b9c84c36395f001a056cc52" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.401662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" event={"ID":"f531ae0e-78ad-4d2c-951f-0d1f7d1c8129","Type":"ContainerDied","Data":"ab055c959ebc58c38c3f6418b80043684df193808b9c84c36395f001a056cc52"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.402316 4760 scope.go:117] "RemoveContainer" containerID="ab055c959ebc58c38c3f6418b80043684df193808b9c84c36395f001a056cc52"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.402652 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-xghfv_openstack-operators(f531ae0e-78ad-4d2c-951f-0d1f7d1c8129)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" podUID="f531ae0e-78ad-4d2c-951f-0d1f7d1c8129"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.407755 4760 generic.go:334] "Generic (PLEG): container finished" podID="97e97ce2-b50b-478e-acb2-cbdd5232d67c" containerID="fd2d0dab86af05c5c2d2aabf17e5d41e875adad32438772f152ba7f816f278c9" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.407826 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" event={"ID":"97e97ce2-b50b-478e-acb2-cbdd5232d67c","Type":"ContainerDied","Data":"fd2d0dab86af05c5c2d2aabf17e5d41e875adad32438772f152ba7f816f278c9"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.408442 4760 scope.go:117] "RemoveContainer" containerID="fd2d0dab86af05c5c2d2aabf17e5d41e875adad32438772f152ba7f816f278c9"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.408711 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-hlbbf_openstack-operators(97e97ce2-b50b-478e-acb2-cbdd5232d67c)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" podUID="97e97ce2-b50b-478e-acb2-cbdd5232d67c"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.413893 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d9d0ad6-0976-4f14-81fb-f286f6768256" containerID="6e207003e9ae45ccb1d185f3779b5f1df6eddda369ea72f52d6ad2038552cbcf" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.413995 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" event={"ID":"6d9d0ad6-0976-4f14-81fb-f286f6768256","Type":"ContainerDied","Data":"6e207003e9ae45ccb1d185f3779b5f1df6eddda369ea72f52d6ad2038552cbcf"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.414781 4760 scope.go:117] "RemoveContainer" containerID="6e207003e9ae45ccb1d185f3779b5f1df6eddda369ea72f52d6ad2038552cbcf"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.428151 4760 generic.go:334] "Generic (PLEG): container finished" podID="25f372bf-e250-492b-abb9-680b1efdbdec" containerID="902a96544371b54618f88c968eed66414cf3c63adf45daf40f7664422ce263e4" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.428229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" event={"ID":"25f372bf-e250-492b-abb9-680b1efdbdec","Type":"ContainerDied","Data":"902a96544371b54618f88c968eed66414cf3c63adf45daf40f7664422ce263e4"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.428919 4760 scope.go:117] "RemoveContainer" containerID="902a96544371b54618f88c968eed66414cf3c63adf45daf40f7664422ce263e4"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.429217 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-6cjlz_openstack-operators(25f372bf-e250-492b-abb9-680b1efdbdec)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" podUID="25f372bf-e250-492b-abb9-680b1efdbdec"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.433751 4760 generic.go:334] "Generic (PLEG): container finished" podID="042ed3e8-ea28-44f7-9859-2d0a1d5c3e17" containerID="c3034fbbefe56c0e2a72391bfc1a76e9e30f47ddad367e337c4feaa782852fc1" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.433815 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" event={"ID":"042ed3e8-ea28-44f7-9859-2d0a1d5c3e17","Type":"ContainerDied","Data":"c3034fbbefe56c0e2a72391bfc1a76e9e30f47ddad367e337c4feaa782852fc1"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.434584 4760 scope.go:117] "RemoveContainer" containerID="c3034fbbefe56c0e2a72391bfc1a76e9e30f47ddad367e337c4feaa782852fc1"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.449877 4760 generic.go:334] "Generic (PLEG): container finished" podID="394da4a0-f1c0-45c3-a31b-9cace1180c53" containerID="b654ec4170d2055e08b7e47ff2fdbdc9e608d9eef0243f166e2b0b54705c0ffe" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.450154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" event={"ID":"394da4a0-f1c0-45c3-a31b-9cace1180c53","Type":"ContainerDied","Data":"b654ec4170d2055e08b7e47ff2fdbdc9e608d9eef0243f166e2b0b54705c0ffe"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.453032 4760 scope.go:117] "RemoveContainer" containerID="b654ec4170d2055e08b7e47ff2fdbdc9e608d9eef0243f166e2b0b54705c0ffe"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.453613 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-76784bbdf-m7z64_metallb-system(394da4a0-f1c0-45c3-a31b-9cace1180c53)\"" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" podUID="394da4a0-f1c0-45c3-a31b-9cace1180c53"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.462079 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc" event={"ID":"fe16fe4f-1740-4d43-a0d2-0d1d649c853c","Type":"ContainerStarted","Data":"f23601d5ec6afa2b361aeabb390892c4d1de8e48b784ec7e8f71707bbbdd6d8b"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.463496 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.466698 4760 generic.go:334] "Generic (PLEG): container finished" podID="f0f31412-34be-4b9d-8df1-b53d23abb1f6" containerID="3d99b6d8a7383bdefe54cd1a026e7097a65535a34199b1ee4dacfcae39e2720f" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.466800 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" event={"ID":"f0f31412-34be-4b9d-8df1-b53d23abb1f6","Type":"ContainerDied","Data":"3d99b6d8a7383bdefe54cd1a026e7097a65535a34199b1ee4dacfcae39e2720f"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.467752 4760 scope.go:117] "RemoveContainer" containerID="3d99b6d8a7383bdefe54cd1a026e7097a65535a34199b1ee4dacfcae39e2720f"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.468331 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-s4q64_openstack-operators(f0f31412-34be-4b9d-8df1-b53d23abb1f6)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" podUID="f0f31412-34be-4b9d-8df1-b53d23abb1f6"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.468866 4760 generic.go:334] "Generic (PLEG): container finished" podID="a9a9b42e-4d3b-495e-804e-af02af05581d" containerID="6c7abdb8700480307bf1f4b846e81df8ab467d7fc1fe2e48daeb5295bfe5c724" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.468933 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" event={"ID":"a9a9b42e-4d3b-495e-804e-af02af05581d","Type":"ContainerDied","Data":"6c7abdb8700480307bf1f4b846e81df8ab467d7fc1fe2e48daeb5295bfe5c724"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.469636 4760 scope.go:117] "RemoveContainer" containerID="f374509f532646a61b20dd3beddfed971429fa3250d97e3645c9b0a746a8e178"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.469770 4760 scope.go:117] "RemoveContainer" containerID="6c7abdb8700480307bf1f4b846e81df8ab467d7fc1fe2e48daeb5295bfe5c724"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.470089 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-5crqc_openstack-operators(a9a9b42e-4d3b-495e-804e-af02af05581d)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" podUID="a9a9b42e-4d3b-495e-804e-af02af05581d"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.476407 4760 generic.go:334] "Generic (PLEG): container finished" podID="9291524e-d650-4366-b795-162d53bf2815" containerID="57e9dcb648b2a4ac82fa82cee59ed3cf96406bf175d172c9b7314f1d9ca12797" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.476466 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" event={"ID":"9291524e-d650-4366-b795-162d53bf2815","Type":"ContainerDied","Data":"57e9dcb648b2a4ac82fa82cee59ed3cf96406bf175d172c9b7314f1d9ca12797"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.477070 4760 scope.go:117] "RemoveContainer" containerID="57e9dcb648b2a4ac82fa82cee59ed3cf96406bf175d172c9b7314f1d9ca12797"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.477336 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-l7cv5_openstack-operators(9291524e-d650-4366-b795-162d53bf2815)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" podUID="9291524e-d650-4366-b795-162d53bf2815"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.481386 4760 generic.go:334] "Generic (PLEG): container finished" podID="03a9ee81-2733-444d-8edc-ddb1303b5686" containerID="43da6dce3d2a1f1a08db7b86e53bda72c9429ed1c7d6b52f142975e6ba214b68" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.481453 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" event={"ID":"03a9ee81-2733-444d-8edc-ddb1303b5686","Type":"ContainerDied","Data":"43da6dce3d2a1f1a08db7b86e53bda72c9429ed1c7d6b52f142975e6ba214b68"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.481941 4760 scope.go:117] "RemoveContainer" containerID="43da6dce3d2a1f1a08db7b86e53bda72c9429ed1c7d6b52f142975e6ba214b68"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.482189 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-k4dk2_openstack-operators(03a9ee81-2733-444d-8edc-ddb1303b5686)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" podUID="03a9ee81-2733-444d-8edc-ddb1303b5686"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.485121 4760 generic.go:334] "Generic (PLEG): container finished" podID="65361481-df4d-4010-a478-91fd2c50d9e6" containerID="6e0de700541ea6121dc2ad91e71739be35033d9324a1f873bcee0cb6e1cd336b" exitCode=1
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.485175 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" event={"ID":"65361481-df4d-4010-a478-91fd2c50d9e6","Type":"ContainerDied","Data":"6e0de700541ea6121dc2ad91e71739be35033d9324a1f873bcee0cb6e1cd336b"}
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.485651 4760 scope.go:117] "RemoveContainer" containerID="6e0de700541ea6121dc2ad91e71739be35033d9324a1f873bcee0cb6e1cd336b"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.485951 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-wvv98_openstack-operators(65361481-df4d-4010-a478-91fd2c50d9e6)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" podUID="65361481-df4d-4010-a478-91fd2c50d9e6"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.492084 4760 scope.go:117] "RemoveContainer" containerID="5dc79c53c3c03ca9b6303fb264d81aff4b24ad2046efcd243289517f5eddc3da"
Nov 25 09:13:05 crc kubenswrapper[4760]: E1125 09:13:05.492344 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-cxjcf_openstack-operators(4e773e83-c06c-47e9-8a34-ef72472e3ae8)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" podUID="4e773e83-c06c-47e9-8a34-ef72472e3ae8"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.657599 4760 scope.go:117] "RemoveContainer" containerID="3bf03df4953d259610af803731deb9aaf22d28bcc3b549ed11c7093e123d5b4a"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.691799 4760 scope.go:117] "RemoveContainer" containerID="53225b6ce3c8a83c4ad7786e8ecd947b524c027578b930d4ea2430a141b6896b"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.733349 4760 scope.go:117] "RemoveContainer" containerID="476af3dd083d0a100d050519fda6d03ee35e63ecb50e5fd2c9e8258c54fc91bc"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.758521 4760 scope.go:117] "RemoveContainer" containerID="78d35fa844f9306bf9e9c781f238abe91ee4e07a9af371de8b90edc168d0f3fc"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.792291 4760 scope.go:117] "RemoveContainer" containerID="cbea14ee85403d952b615cae27edba12b1e9f01ef6fd8db4254a3ba49852c04d"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.820861 4760 scope.go:117] "RemoveContainer" containerID="a42195511db5c10b0b1cb254dbbfec7cd13dc0f4b1a554e46fa8f18c39064ba7"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.850687 4760 scope.go:117] "RemoveContainer" containerID="1ed89f44c4cb3d5308462671a5bfdb712260c4f51b4f04768897c8a3c4d206f6"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.885951 4760 scope.go:117] "RemoveContainer" containerID="3773418666d6c1aa765b32572dc5f7d2064dce044f3934d02959369d7bc6b072"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.928562 4760 scope.go:117] "RemoveContainer" containerID="6cc06ddc45048296f24515a3ee7d592b625eb990c80087a56169b579e5f0d1c1"
Nov 25 09:13:05 crc kubenswrapper[4760]: I1125 09:13:05.959182 4760 scope.go:117] "RemoveContainer" containerID="963bf83dfc51cf642f3f5f4f3376f99812aecbff49365eaf6e05541ce2015fe4"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.502785 4760 generic.go:334] "Generic (PLEG): container finished" podID="890067e5-2be8-4699-8d90-f2771ef453e5" containerID="25412b31aca55ca851c51777d6bbcd451a20170fc242f9b770c57bdf4b3c140a" exitCode=1
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.502848 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" event={"ID":"890067e5-2be8-4699-8d90-f2771ef453e5","Type":"ContainerDied","Data":"25412b31aca55ca851c51777d6bbcd451a20170fc242f9b770c57bdf4b3c140a"}
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.502877 4760 scope.go:117] "RemoveContainer" containerID="373db6e0c2b67d0d63ddfbebfb084a2bfedd2006f38f42a6725dd6dfcadf172d"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.503560 4760 scope.go:117] "RemoveContainer" containerID="25412b31aca55ca851c51777d6bbcd451a20170fc242f9b770c57bdf4b3c140a"
Nov 25 09:13:06 crc kubenswrapper[4760]: E1125 09:13:06.503794 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-l28cr_openstack-operators(890067e5-2be8-4699-8d90-f2771ef453e5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" podUID="890067e5-2be8-4699-8d90-f2771ef453e5"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.516749 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j" event={"ID":"042ed3e8-ea28-44f7-9859-2d0a1d5c3e17","Type":"ContainerStarted","Data":"99ffdce7296dd164f4bf2c0e8407968a5e26cc2a13a931ce8c2ebcce68c643eb"}
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.517669 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.539359 4760 scope.go:117] "RemoveContainer" containerID="6da083de3278fe701ddf2d001a9498b330aa9702e5bc373edddd2c153cb45a79"
Nov 25 09:13:06 crc kubenswrapper[4760]: E1125 09:13:06.539612 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-njfjf_openstack-operators(33faed21-8b19-4064-a6e2-5064ce8cbab2)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" podUID="33faed21-8b19-4064-a6e2-5064ce8cbab2"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.543208 4760 scope.go:117] "RemoveContainer" containerID="57e9dcb648b2a4ac82fa82cee59ed3cf96406bf175d172c9b7314f1d9ca12797"
Nov 25 09:13:06 crc kubenswrapper[4760]: E1125 09:13:06.543609 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-l7cv5_openstack-operators(9291524e-d650-4366-b795-162d53bf2815)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" podUID="9291524e-d650-4366-b795-162d53bf2815"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.546756 4760 generic.go:334] "Generic (PLEG): container finished" podID="0f496ee1-ca51-427f-a51d-4fc214c7f50a" containerID="b7f2d5db183f786726bbec0d16baa35f5a6aebb093333bcf767491f62c6fc104" exitCode=1
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.546826 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" event={"ID":"0f496ee1-ca51-427f-a51d-4fc214c7f50a","Type":"ContainerDied","Data":"b7f2d5db183f786726bbec0d16baa35f5a6aebb093333bcf767491f62c6fc104"}
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.547509 4760 scope.go:117] "RemoveContainer" containerID="b7f2d5db183f786726bbec0d16baa35f5a6aebb093333bcf767491f62c6fc104"
Nov 25 09:13:06 crc kubenswrapper[4760]: E1125 09:13:06.547807 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-cr5ch_openstack-operators(0f496ee1-ca51-427f-a51d-4fc214c7f50a)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" podUID="0f496ee1-ca51-427f-a51d-4fc214c7f50a"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.559376 4760 scope.go:117] "RemoveContainer" containerID="6e0de700541ea6121dc2ad91e71739be35033d9324a1f873bcee0cb6e1cd336b"
Nov 25 09:13:06 crc kubenswrapper[4760]: E1125 09:13:06.559873 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-wvv98_openstack-operators(65361481-df4d-4010-a478-91fd2c50d9e6)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" podUID="65361481-df4d-4010-a478-91fd2c50d9e6"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.561797 4760 scope.go:117] "RemoveContainer" containerID="3d99b6d8a7383bdefe54cd1a026e7097a65535a34199b1ee4dacfcae39e2720f"
Nov 25 09:13:06 crc kubenswrapper[4760]: E1125 09:13:06.562032 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-s4q64_openstack-operators(f0f31412-34be-4b9d-8df1-b53d23abb1f6)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" podUID="f0f31412-34be-4b9d-8df1-b53d23abb1f6"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.565477 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d9d0ad6-0976-4f14-81fb-f286f6768256" containerID="e13614c5975081f9167f3d40da509ee61510733249dda0b5f5584a915a83c964" exitCode=1
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.565536 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" event={"ID":"6d9d0ad6-0976-4f14-81fb-f286f6768256","Type":"ContainerDied","Data":"e13614c5975081f9167f3d40da509ee61510733249dda0b5f5584a915a83c964"}
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.566417 4760 scope.go:117] "RemoveContainer" containerID="e13614c5975081f9167f3d40da509ee61510733249dda0b5f5584a915a83c964"
Nov 25 09:13:06 crc kubenswrapper[4760]: E1125 09:13:06.566735 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-w4gcn_openstack-operators(6d9d0ad6-0976-4f14-81fb-f286f6768256)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" podUID="6d9d0ad6-0976-4f14-81fb-f286f6768256"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.605492 4760 scope.go:117] "RemoveContainer" containerID="4c25e20700be96ef60479e8ae592b3174f0611826b3ba3aefc1c35ce0702f23b"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.694071 4760 scope.go:117] "RemoveContainer" containerID="6e207003e9ae45ccb1d185f3779b5f1df6eddda369ea72f52d6ad2038552cbcf"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.836131 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 09:13:06 crc kubenswrapper[4760]: I1125 09:13:06.993410 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2fd156f5-79f2-475c-8a3d-3c9a6c7890b9"
Nov 25 09:13:08 crc kubenswrapper[4760]: I1125 09:13:08.709847 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-c8gdx"
Nov 25 09:13:08 crc kubenswrapper[4760]: I1125 09:13:08.830158 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="bd20932f-cb28-4343-98df-425123f7c87f" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 25 09:13:08 crc kubenswrapper[4760]: I1125 09:13:08.830242 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0"
Nov 25 09:13:08 crc kubenswrapper[4760]: I1125 09:13:08.831076 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted"
Nov 25 09:13:08 crc kubenswrapper[4760]: I1125 09:13:08.831128 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bd20932f-cb28-4343-98df-425123f7c87f" containerName="kube-state-metrics" containerID="cri-o://72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b" gracePeriod=30
Nov 25 09:13:08 crc kubenswrapper[4760]: I1125 09:13:08.854817 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4"
Nov 25 09:13:08 crc kubenswrapper[4760]: I1125 09:13:08.855672 4760 scope.go:117] "RemoveContainer" containerID="29f29daba05aae1427f522161eedad14a829ee85ddd5a85bfbb28d962e5d59df"
Nov 25 09:13:08 crc kubenswrapper[4760]: E1125 09:13:08.855927 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-7cd5954d9-wmmn4_openstack-operators(c43ab37e-375d-4000-8313-9ea135250641)\"" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" podUID="c43ab37e-375d-4000-8313-9ea135250641"
Nov 25 09:13:08 crc kubenswrapper[4760]: I1125 09:13:08.958284 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 09:13:09 crc kubenswrapper[4760]: I1125 09:13:09.502364 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7759656c4c-n49xc"
Nov 25 09:13:09 crc kubenswrapper[4760]: I1125 09:13:09.598128 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd20932f-cb28-4343-98df-425123f7c87f" containerID="72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b" exitCode=2
Nov 25 09:13:09 crc kubenswrapper[4760]: I1125 09:13:09.598159 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd20932f-cb28-4343-98df-425123f7c87f" containerID="13928a6c1bc89f529a50acb7416dbcba00b4e1a0c0d9c793eff29a52ce9154d7" exitCode=1
Nov 25 09:13:09 crc kubenswrapper[4760]: I1125 09:13:09.598181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd20932f-cb28-4343-98df-425123f7c87f","Type":"ContainerDied","Data":"72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b"}
Nov 25 09:13:09 crc kubenswrapper[4760]: I1125 09:13:09.598219 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd20932f-cb28-4343-98df-425123f7c87f","Type":"ContainerDied","Data":"13928a6c1bc89f529a50acb7416dbcba00b4e1a0c0d9c793eff29a52ce9154d7"}
Nov 25 09:13:09 crc kubenswrapper[4760]: I1125 09:13:09.598237 4760 scope.go:117] "RemoveContainer" containerID="72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b"
Nov 25 09:13:09 crc kubenswrapper[4760]: I1125 09:13:09.598874 4760 scope.go:117] "RemoveContainer" containerID="13928a6c1bc89f529a50acb7416dbcba00b4e1a0c0d9c793eff29a52ce9154d7"
Nov 25 09:13:09 crc kubenswrapper[4760]: I1125 09:13:09.646421 4760 scope.go:117] "RemoveContainer" containerID="72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b"
Nov 25 09:13:09 crc kubenswrapper[4760]: E1125 09:13:09.646849 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b\": container with ID starting with 72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b not found: ID does not exist" containerID="72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b"
Nov 25 09:13:09 crc kubenswrapper[4760]: I1125 09:13:09.646881 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b"} err="failed to get container status \"72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b\": rpc error: code = NotFound desc = could not find container \"72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b\": container with ID starting with 72fce26361d7e0df847086a617289c42216dd4f30d4b8d3ad408493e4732023b not found: ID does not exist"
Nov 25 09:13:10 crc kubenswrapper[4760]: I1125 09:13:10.608970 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd20932f-cb28-4343-98df-425123f7c87f" containerID="3a1fffa497d07ac9b3589a1296e0db0bdffbd8f4fe9abd09127981892028f477" exitCode=1
Nov 25 09:13:10 crc kubenswrapper[4760]: I1125 09:13:10.609053 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd20932f-cb28-4343-98df-425123f7c87f","Type":"ContainerDied","Data":"3a1fffa497d07ac9b3589a1296e0db0bdffbd8f4fe9abd09127981892028f477"}
Nov 25 09:13:10 crc kubenswrapper[4760]: I1125 09:13:10.609368 4760 scope.go:117] "RemoveContainer" containerID="13928a6c1bc89f529a50acb7416dbcba00b4e1a0c0d9c793eff29a52ce9154d7"
Nov 25 09:13:10 crc kubenswrapper[4760]: I1125 09:13:10.610479 4760 scope.go:117] "RemoveContainer" containerID="3a1fffa497d07ac9b3589a1296e0db0bdffbd8f4fe9abd09127981892028f477"
Nov 25 09:13:10 crc kubenswrapper[4760]: E1125 09:13:10.611133 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(bd20932f-cb28-4343-98df-425123f7c87f)\"" pod="openstack/kube-state-metrics-0" podUID="bd20932f-cb28-4343-98df-425123f7c87f"
Nov 25 09:13:10 crc kubenswrapper[4760]: I1125 09:13:10.990164 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 25 09:13:11 crc kubenswrapper[4760]: I1125 09:13:11.283149 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 25 09:13:11 crc kubenswrapper[4760]: I1125 09:13:11.338160 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-mzqnz"
Nov 25 09:13:11 crc kubenswrapper[4760]: I1125 09:13:11.521458 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 25 09:13:11 crc kubenswrapper[4760]: I1125 09:13:11.626676 4760 scope.go:117]
"RemoveContainer" containerID="3a1fffa497d07ac9b3589a1296e0db0bdffbd8f4fe9abd09127981892028f477" Nov 25 09:13:11 crc kubenswrapper[4760]: E1125 09:13:11.627388 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(bd20932f-cb28-4343-98df-425123f7c87f)\"" pod="openstack/kube-state-metrics-0" podUID="bd20932f-cb28-4343-98df-425123f7c87f" Nov 25 09:13:11 crc kubenswrapper[4760]: I1125 09:13:11.838510 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 09:13:11 crc kubenswrapper[4760]: I1125 09:13:11.862548 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 09:13:11 crc kubenswrapper[4760]: I1125 09:13:11.884957 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.017485 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.039581 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.119436 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.152016 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.484391 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-client" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.751650 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cxhr9" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.822111 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zvdcb" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.937163 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-dnpxg" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.950287 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.958052 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 25 09:13:12 crc kubenswrapper[4760]: I1125 09:13:12.967865 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.015479 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.041374 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.100199 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.155075 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.446865 4760 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.731541 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.738653 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gczdm" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.743477 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.760241 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.831696 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.832501 4760 scope.go:117] "RemoveContainer" containerID="b654ec4170d2055e08b7e47ff2fdbdc9e608d9eef0243f166e2b0b54705c0ffe" Nov 25 09:13:13 crc kubenswrapper[4760]: E1125 09:13:13.832814 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-76784bbdf-m7z64_metallb-system(394da4a0-f1c0-45c3-a31b-9cace1180c53)\"" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" podUID="394da4a0-f1c0-45c3-a31b-9cace1180c53" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.858244 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.858302 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.858743 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.859053 4760 scope.go:117] "RemoveContainer" containerID="fd2d0dab86af05c5c2d2aabf17e5d41e875adad32438772f152ba7f816f278c9" Nov 25 09:13:13 crc kubenswrapper[4760]: E1125 09:13:13.859331 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-hlbbf_openstack-operators(97e97ce2-b50b-478e-acb2-cbdd5232d67c)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" podUID="97e97ce2-b50b-478e-acb2-cbdd5232d67c" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.876731 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.876792 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.877608 4760 scope.go:117] "RemoveContainer" containerID="43da6dce3d2a1f1a08db7b86e53bda72c9429ed1c7d6b52f142975e6ba214b68" Nov 25 09:13:13 crc kubenswrapper[4760]: E1125 09:13:13.877862 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-k4dk2_openstack-operators(03a9ee81-2733-444d-8edc-ddb1303b5686)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" 
podUID="03a9ee81-2733-444d-8edc-ddb1303b5686" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.900124 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.900521 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.900903 4760 scope.go:117] "RemoveContainer" containerID="ab055c959ebc58c38c3f6418b80043684df193808b9c84c36395f001a056cc52" Nov 25 09:13:13 crc kubenswrapper[4760]: E1125 09:13:13.901134 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-xghfv_openstack-operators(f531ae0e-78ad-4d2c-951f-0d1f7d1c8129)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" podUID="f531ae0e-78ad-4d2c-951f-0d1f7d1c8129" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.952442 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.952523 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.954859 4760 scope.go:117] "RemoveContainer" containerID="902a96544371b54618f88c968eed66414cf3c63adf45daf40f7664422ce263e4" Nov 25 09:13:13 crc kubenswrapper[4760]: E1125 09:13:13.956413 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=glance-operator-controller-manager-68b95954c9-6cjlz_openstack-operators(25f372bf-e250-492b-abb9-680b1efdbdec)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" podUID="25f372bf-e250-492b-abb9-680b1efdbdec" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.979433 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.994541 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.994614 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.994627 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.994640 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.995530 4760 scope.go:117] "RemoveContainer" containerID="447b600e223dd023afc665bbc8e69de26c4c58899c9aae857fc26a2bd42f8ad3" Nov 25 09:13:13 crc kubenswrapper[4760]: I1125 09:13:13.995663 4760 scope.go:117] "RemoveContainer" containerID="25412b31aca55ca851c51777d6bbcd451a20170fc242f9b770c57bdf4b3c140a" Nov 25 09:13:13 crc kubenswrapper[4760]: E1125 09:13:13.995790 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=heat-operator-controller-manager-774b86978c-l24ns_openstack-operators(b4325bd6-c276-4fbc-bc67-cf5a026c3537)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" podUID="b4325bd6-c276-4fbc-bc67-cf5a026c3537" Nov 25 09:13:13 crc kubenswrapper[4760]: E1125 09:13:13.995987 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-l28cr_openstack-operators(890067e5-2be8-4699-8d90-f2771ef453e5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" podUID="890067e5-2be8-4699-8d90-f2771ef453e5" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.006004 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.060005 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.060567 4760 scope.go:117] "RemoveContainer" containerID="27635df616130284a15eb94a27e9ebd98473f3d62df0e337b90391afbbd16971" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.074485 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.074580 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.076109 4760 scope.go:117] "RemoveContainer" containerID="0332ad36cd62f0151d1d92f9e0ecff9e2b50385a38068b6f6a37c73f897293eb" Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.076515 4760 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-x7r44_openstack-operators(6dde35ac-ff01-4e46-9eae-234e6abc37dc)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" podUID="6dde35ac-ff01-4e46-9eae-234e6abc37dc" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.086293 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bf2wt" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.156165 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.197170 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.240870 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.263778 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.286193 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.305579 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.326305 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.327204 4760 scope.go:117] "RemoveContainer" 
containerID="3d99b6d8a7383bdefe54cd1a026e7097a65535a34199b1ee4dacfcae39e2720f" Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.327523 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-s4q64_openstack-operators(f0f31412-34be-4b9d-8df1-b53d23abb1f6)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" podUID="f0f31412-34be-4b9d-8df1-b53d23abb1f6" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.327827 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.419375 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.433663 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.463273 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.463330 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.463807 4760 scope.go:117] "RemoveContainer" containerID="57e9dcb648b2a4ac82fa82cee59ed3cf96406bf175d172c9b7314f1d9ca12797" Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.464040 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=neutron-operator-controller-manager-7c57c8bbc4-l7cv5_openstack-operators(9291524e-d650-4366-b795-162d53bf2815)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" podUID="9291524e-d650-4366-b795-162d53bf2815" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.484462 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-p67ht" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.505934 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.534607 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.535622 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.536661 4760 scope.go:117] "RemoveContainer" containerID="5dc79c53c3c03ca9b6303fb264d81aff4b24ad2046efcd243289517f5eddc3da" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.580094 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.603642 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.603702 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.604593 4760 scope.go:117] "RemoveContainer" 
containerID="6da083de3278fe701ddf2d001a9498b330aa9702e5bc373edddd2c153cb45a79" Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.604842 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-njfjf_openstack-operators(33faed21-8b19-4064-a6e2-5064ce8cbab2)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" podUID="33faed21-8b19-4064-a6e2-5064ce8cbab2" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.623888 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.653376 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.654520 4760 scope.go:117] "RemoveContainer" containerID="5a1c09aa44ace2d2787826b2246848237a891936c855509cd2aac9fd24069541" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.658170 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.660206 4760 generic.go:334] "Generic (PLEG): container finished" podID="1d556614-e3c1-4834-919a-0c6f5f5cc4de" containerID="082a4a2e82422d9a4ca9debd013bb751e75aea940cb4a875bdda7f34501318f7" exitCode=1 Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.660324 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" event={"ID":"1d556614-e3c1-4834-919a-0c6f5f5cc4de","Type":"ContainerDied","Data":"082a4a2e82422d9a4ca9debd013bb751e75aea940cb4a875bdda7f34501318f7"} Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 
09:13:14.660384 4760 scope.go:117] "RemoveContainer" containerID="27635df616130284a15eb94a27e9ebd98473f3d62df0e337b90391afbbd16971" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.661055 4760 scope.go:117] "RemoveContainer" containerID="ab055c959ebc58c38c3f6418b80043684df193808b9c84c36395f001a056cc52" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.661151 4760 scope.go:117] "RemoveContainer" containerID="082a4a2e82422d9a4ca9debd013bb751e75aea940cb4a875bdda7f34501318f7" Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.661368 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-xghfv_openstack-operators(f531ae0e-78ad-4d2c-951f-0d1f7d1c8129)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" podUID="f531ae0e-78ad-4d2c-951f-0d1f7d1c8129" Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.661503 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-kw54v_openstack-operators(1d556614-e3c1-4834-919a-0c6f5f5cc4de)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" podUID="1d556614-e3c1-4834-919a-0c6f5f5cc4de" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.663238 4760 scope.go:117] "RemoveContainer" containerID="3d99b6d8a7383bdefe54cd1a026e7097a65535a34199b1ee4dacfcae39e2720f" Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.663567 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=manila-operator-controller-manager-58bb8d67cc-s4q64_openstack-operators(f0f31412-34be-4b9d-8df1-b53d23abb1f6)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" podUID="f0f31412-34be-4b9d-8df1-b53d23abb1f6" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.675182 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.687843 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.739315 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.740150 4760 scope.go:117] "RemoveContainer" containerID="e4243a7d630434fb3c8a541704a72bcde8912858215c199c8b8d166ba68d7290" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.762432 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.762473 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.763171 4760 scope.go:117] "RemoveContainer" containerID="6e0de700541ea6121dc2ad91e71739be35033d9324a1f873bcee0cb6e1cd336b" Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.763420 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-wvv98_openstack-operators(65361481-df4d-4010-a478-91fd2c50d9e6)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" 
podUID="65361481-df4d-4010-a478-91fd2c50d9e6"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.802898 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.813789 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.825346 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.826648 4760 scope.go:117] "RemoveContainer" containerID="4ab285b77ae28bca1478bf4618bead2930f12450245e00f95e17ed480309a5a1"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.835120 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.846175 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.872920 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.872957 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.873691 4760 scope.go:117] "RemoveContainer" containerID="e13614c5975081f9167f3d40da509ee61510733249dda0b5f5584a915a83c964"
Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.873978 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-w4gcn_openstack-operators(6d9d0ad6-0976-4f14-81fb-f286f6768256)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" podUID="6d9d0ad6-0976-4f14-81fb-f286f6768256"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.882100 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tzfgb"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.896520 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.897447 4760 scope.go:117] "RemoveContainer" containerID="e9c742beaa955660cce157c3f9fb47c1e4cf20171acc817ba09189c5d70db486"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.977184 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.977260 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch"
Nov 25 09:13:14 crc kubenswrapper[4760]: I1125 09:13:14.978180 4760 scope.go:117] "RemoveContainer" containerID="b7f2d5db183f786726bbec0d16baa35f5a6aebb093333bcf767491f62c6fc104"
Nov 25 09:13:14 crc kubenswrapper[4760]: E1125 09:13:14.978590 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-cr5ch_openstack-operators(0f496ee1-ca51-427f-a51d-4fc214c7f50a)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" podUID="0f496ee1-ca51-427f-a51d-4fc214c7f50a"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.455775 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.456151 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.456639 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.459087 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.460791 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-6b5bm"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.460997 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.471218 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.473069 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.473380 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qvh9g"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.473541 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.473768 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-7dknh"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.484555 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.571968 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.673824 4760 generic.go:334] "Generic (PLEG): container finished" podID="002e6b13-60c5-484c-8116-b4d5241ed678" containerID="f40242a064337e6662605e340082c5eb6d57f643523c2e54bd96758fecf108ff" exitCode=1
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.673884 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" event={"ID":"002e6b13-60c5-484c-8116-b4d5241ed678","Type":"ContainerDied","Data":"f40242a064337e6662605e340082c5eb6d57f643523c2e54bd96758fecf108ff"}
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.673915 4760 scope.go:117] "RemoveContainer" containerID="5a1c09aa44ace2d2787826b2246848237a891936c855509cd2aac9fd24069541"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.675305 4760 scope.go:117] "RemoveContainer" containerID="f40242a064337e6662605e340082c5eb6d57f643523c2e54bd96758fecf108ff"
Nov 25 09:13:15 crc kubenswrapper[4760]: E1125 09:13:15.676109 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-54bpm_openstack-operators(002e6b13-60c5-484c-8116-b4d5241ed678)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" podUID="002e6b13-60c5-484c-8116-b4d5241ed678"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.680435 4760 generic.go:334] "Generic (PLEG): container finished" podID="23471a89-c4fb-4e45-b7bb-2664e4ea99f3" containerID="eedf461c9950c6e80650ed140cc368daaf1b253329b46541f04d6248ffab463b" exitCode=1
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.680626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" event={"ID":"23471a89-c4fb-4e45-b7bb-2664e4ea99f3","Type":"ContainerDied","Data":"eedf461c9950c6e80650ed140cc368daaf1b253329b46541f04d6248ffab463b"}
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.681359 4760 scope.go:117] "RemoveContainer" containerID="eedf461c9950c6e80650ed140cc368daaf1b253329b46541f04d6248ffab463b"
Nov 25 09:13:15 crc kubenswrapper[4760]: E1125 09:13:15.681658 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-j5fsj_openstack-operators(23471a89-c4fb-4e45-b7bb-2664e4ea99f3)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" podUID="23471a89-c4fb-4e45-b7bb-2664e4ea99f3"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.688623 4760 generic.go:334] "Generic (PLEG): container finished" podID="cef58941-ae6b-4624-af41-65ab598838eb" containerID="cc6fc6e89fd3b58fe7d6daada9d30327611aa75bb7fd160d58faf5034b264de3" exitCode=1
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.688695 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" event={"ID":"cef58941-ae6b-4624-af41-65ab598838eb","Type":"ContainerDied","Data":"cc6fc6e89fd3b58fe7d6daada9d30327611aa75bb7fd160d58faf5034b264de3"}
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.689334 4760 scope.go:117] "RemoveContainer" containerID="cc6fc6e89fd3b58fe7d6daada9d30327611aa75bb7fd160d58faf5034b264de3"
Nov 25 09:13:15 crc kubenswrapper[4760]: E1125 09:13:15.689571 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-plxrr_openstack-operators(cef58941-ae6b-4624-af41-65ab598838eb)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.697165 4760 generic.go:334] "Generic (PLEG): container finished" podID="8aea8bb6-720b-412a-acfc-f62366da5de5" containerID="d612bc8fcf1171d535f0d49d3d30acc3064d07961df66aec229a2ec787a7b925" exitCode=1
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.697277 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" event={"ID":"8aea8bb6-720b-412a-acfc-f62366da5de5","Type":"ContainerDied","Data":"d612bc8fcf1171d535f0d49d3d30acc3064d07961df66aec229a2ec787a7b925"}
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.698564 4760 scope.go:117] "RemoveContainer" containerID="d612bc8fcf1171d535f0d49d3d30acc3064d07961df66aec229a2ec787a7b925"
Nov 25 09:13:15 crc kubenswrapper[4760]: E1125 09:13:15.699122 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pmw6n_openstack-operators(8aea8bb6-720b-412a-acfc-f62366da5de5)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" podUID="8aea8bb6-720b-412a-acfc-f62366da5de5"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.703286 4760 generic.go:334] "Generic (PLEG): container finished" podID="4e773e83-c06c-47e9-8a34-ef72472e3ae8" containerID="ec8fcfe1b7098acac68b331c547de888964f923471cab1ed7dc0460ce24b22bb" exitCode=1
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.703331 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" event={"ID":"4e773e83-c06c-47e9-8a34-ef72472e3ae8","Type":"ContainerDied","Data":"ec8fcfe1b7098acac68b331c547de888964f923471cab1ed7dc0460ce24b22bb"}
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.704055 4760 scope.go:117] "RemoveContainer" containerID="ec8fcfe1b7098acac68b331c547de888964f923471cab1ed7dc0460ce24b22bb"
Nov 25 09:13:15 crc kubenswrapper[4760]: E1125 09:13:15.704384 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-cxjcf_openstack-operators(4e773e83-c06c-47e9-8a34-ef72472e3ae8)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" podUID="4e773e83-c06c-47e9-8a34-ef72472e3ae8"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.742149 4760 scope.go:117] "RemoveContainer" containerID="e4243a7d630434fb3c8a541704a72bcde8912858215c199c8b8d166ba68d7290"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.764572 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.809839 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.829178 4760 scope.go:117] "RemoveContainer" containerID="e9c742beaa955660cce157c3f9fb47c1e4cf20171acc817ba09189c5d70db486"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.868017 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.878283 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.893402 4760 scope.go:117] "RemoveContainer" containerID="4ab285b77ae28bca1478bf4618bead2930f12450245e00f95e17ed480309a5a1"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.913045 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.928681 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.938810 4760 scope.go:117] "RemoveContainer" containerID="6c7abdb8700480307bf1f4b846e81df8ab467d7fc1fe2e48daeb5295bfe5c724"
Nov 25 09:13:15 crc kubenswrapper[4760]: I1125 09:13:15.959904 4760 scope.go:117] "RemoveContainer" containerID="5dc79c53c3c03ca9b6303fb264d81aff4b24ad2046efcd243289517f5eddc3da"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.010495 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.027689 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.063655 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.065790 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.096357 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.104855 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.157024 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.176454 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.216094 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.227470 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.289725 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-n6sq6"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.297080 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.297160 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.330419 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.335960 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.358786 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.383585 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.385365 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.435893 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.448390 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.448611 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.450522 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.506891 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.638781 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.673393 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.722055 4760 generic.go:334] "Generic (PLEG): container finished" podID="a9a9b42e-4d3b-495e-804e-af02af05581d" containerID="858e80bbba0ee7669e5a94868721d87be651860890cde3c30c116321cd1559e0" exitCode=1
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.722146 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" event={"ID":"a9a9b42e-4d3b-495e-804e-af02af05581d","Type":"ContainerDied","Data":"858e80bbba0ee7669e5a94868721d87be651860890cde3c30c116321cd1559e0"}
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.722431 4760 scope.go:117] "RemoveContainer" containerID="6c7abdb8700480307bf1f4b846e81df8ab467d7fc1fe2e48daeb5295bfe5c724"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.723264 4760 scope.go:117] "RemoveContainer" containerID="858e80bbba0ee7669e5a94868721d87be651860890cde3c30c116321cd1559e0"
Nov 25 09:13:16 crc kubenswrapper[4760]: E1125 09:13:16.723622 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-5crqc_openstack-operators(a9a9b42e-4d3b-495e-804e-af02af05581d)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" podUID="a9a9b42e-4d3b-495e-804e-af02af05581d"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.731432 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.739799 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.746373 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.770655 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.790034 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.827985 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.846697 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.926108 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.949676 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.950093 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.951832 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.969277 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 25 09:13:16 crc kubenswrapper[4760]: I1125 09:13:16.998002 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.027780 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.086525 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-x9fjd"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.120695 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vsxvz"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.170825 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.192547 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.200773 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.287896 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.289035 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9dpz6"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.305744 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.311685 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6tpnz"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.316881 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9fqjq"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.319190 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.319265 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.341910 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-n2mfk"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.356950 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.377132 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.381065 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mgpb7"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.409044 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.448205 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.462436 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.463853 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.497408 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-cbzzc"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.577042 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.633002 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.657008 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.695114 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.696749 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.708369 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.709582 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.727163 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.767015 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.868390 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.911543 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.916746 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.935723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.936659 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5dbjm"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.958823 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.979166 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.986038 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.987371 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 25 09:13:17 crc kubenswrapper[4760]: I1125 09:13:17.997913 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.029963 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wgbnv"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.079099 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.088557 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.090378 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.114741 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.124302 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-h2n7f"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.132841 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.133094 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.143102 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.171352 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.171711 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.181907 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.216411 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tcbdm"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.239198 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.239211 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cspft"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.279170 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.362880 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.372883 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.373959 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7bdqq"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.373987 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.376544 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.417267 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.507186 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.528014 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.556592 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.558881 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.605723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.608537 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.644064 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.660492 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.681772 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.720020 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.739031 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.747635 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.749510 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.763747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.786792 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-w7hrm"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.824928 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.824989 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.826061 4760 scope.go:117] "RemoveContainer" containerID="3a1fffa497d07ac9b3589a1296e0db0bdffbd8f4fe9abd09127981892028f477"
Nov 25 09:13:18 crc kubenswrapper[4760]: E1125 09:13:18.826498 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(bd20932f-cb28-4343-98df-425123f7c87f)\"" pod="openstack/kube-state-metrics-0" podUID="bd20932f-cb28-4343-98df-425123f7c87f"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.827472 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.854916 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.855898 4760 scope.go:117] "RemoveContainer" containerID="29f29daba05aae1427f522161eedad14a829ee85ddd5a85bfbb28d962e5d59df"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.875426 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.898360 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.906201 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.907396 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.907734 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.941092 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.953586 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 25 09:13:18 crc kubenswrapper[4760]: I1125 09:13:18.980181 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.008004 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.037658 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.065078 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.146828 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.181227 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8566bc9698-5hw7j"
Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.185688 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.291091 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc"
Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.347636
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.357083 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.362224 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.370507 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.391939 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.396298 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.446910 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.455284 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.456318 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.489807 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x2fwx" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.512272 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.528301 4760 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.594589 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-prz7t" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.617037 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.640043 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.680832 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.748445 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.755478 4760 generic.go:334] "Generic (PLEG): container finished" podID="c43ab37e-375d-4000-8313-9ea135250641" containerID="7e683bf578987f686f3a98585d1fb169c8c66a7e420d5fea78577ba81b0a5740" exitCode=1 Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.755527 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" event={"ID":"c43ab37e-375d-4000-8313-9ea135250641","Type":"ContainerDied","Data":"7e683bf578987f686f3a98585d1fb169c8c66a7e420d5fea78577ba81b0a5740"} Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.755632 4760 scope.go:117] "RemoveContainer" containerID="29f29daba05aae1427f522161eedad14a829ee85ddd5a85bfbb28d962e5d59df" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.756813 4760 scope.go:117] "RemoveContainer" containerID="7e683bf578987f686f3a98585d1fb169c8c66a7e420d5fea78577ba81b0a5740" Nov 25 09:13:19 crc kubenswrapper[4760]: E1125 
09:13:19.757443 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-7cd5954d9-wmmn4_openstack-operators(c43ab37e-375d-4000-8313-9ea135250641)\"" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" podUID="c43ab37e-375d-4000-8313-9ea135250641" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.760998 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.761061 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jkb85" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.767023 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.798242 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4sqtg" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.842367 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sbjbt" Nov 25 09:13:19 crc kubenswrapper[4760]: I1125 09:13:19.953404 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.000168 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.071661 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 09:13:20 crc kubenswrapper[4760]: 
I1125 09:13:20.078396 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rcnq4" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.118512 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b98zq" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.125371 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9v7wz" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.159503 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.180675 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.183380 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.227807 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.232912 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-shlwm" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.234285 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.248757 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.267012 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.281865 
4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.305557 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.309238 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.342360 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gq598" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.427391 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.469695 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.517753 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.523905 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.532722 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.563679 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.585977 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.598067 4760 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.632778 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.733452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.782994 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.784566 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.784847 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.791177 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.831557 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.900832 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.911852 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.937326 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 09:13:20 crc kubenswrapper[4760]: I1125 09:13:20.983974 4760 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.056313 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.063316 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.078739 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.117436 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.125181 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.126333 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.126607 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.130076 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.146684 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.155048 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.175917 4760 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.180384 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.186188 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.191682 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.200979 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.303807 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.366116 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-85rjz" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.394107 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gzjwq" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.445774 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.471953 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.528598 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.538301 4760 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.633828 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.637652 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.681018 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.735452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-m8sxg" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.774527 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.785424 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.824119 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.858839 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.858989 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.859798 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.871345 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.881447 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-jr5wk" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.904604 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.938590 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zj4q5" Nov 25 09:13:21 crc kubenswrapper[4760]: I1125 09:13:21.989822 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.007563 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.217051 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.225089 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.285339 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.369385 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.407337 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 
09:13:22.434874 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.435807 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.442607 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-c7bkp" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.480444 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.483508 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.487977 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hc25b" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.490609 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.514560 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.522268 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.543987 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.550304 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 09:13:22 crc 
kubenswrapper[4760]: I1125 09:13:22.664047 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ljtn8" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.672119 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vkrgb" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.759238 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.760049 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.780919 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.789383 4760 generic.go:334] "Generic (PLEG): container finished" podID="7498b2f4-5621-4e4d-8d34-d8fc09271dcf" containerID="5e98f5db2c010e73c48cd3ce193e1de3189ac902b6a21c77c3adaf2f92798112" exitCode=1 Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.789431 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj" event={"ID":"7498b2f4-5621-4e4d-8d34-d8fc09271dcf","Type":"ContainerDied","Data":"5e98f5db2c010e73c48cd3ce193e1de3189ac902b6a21c77c3adaf2f92798112"} Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.790224 4760 scope.go:117] "RemoveContainer" containerID="5e98f5db2c010e73c48cd3ce193e1de3189ac902b6a21c77c3adaf2f92798112" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.792465 4760 generic.go:334] "Generic (PLEG): container finished" podID="a6f5c6ad-5f4b-442a-9041-7f053349a0e7" containerID="63a7580f99bac9edc09f3fd12a28a54be7e71711be652baa1ddeee4a9635c6ac" exitCode=1 Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.792501 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-86mq8" event={"ID":"a6f5c6ad-5f4b-442a-9041-7f053349a0e7","Type":"ContainerDied","Data":"63a7580f99bac9edc09f3fd12a28a54be7e71711be652baa1ddeee4a9635c6ac"} Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.792962 4760 scope.go:117] "RemoveContainer" containerID="63a7580f99bac9edc09f3fd12a28a54be7e71711be652baa1ddeee4a9635c6ac" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.803668 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.875882 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.915192 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.953390 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.957000 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.960720 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 09:13:22 crc kubenswrapper[4760]: I1125 09:13:22.974818 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.077586 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.089876 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 09:13:23 crc 
kubenswrapper[4760]: I1125 09:13:23.089942 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.094498 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.102961 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.109669 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.109652757 podStartE2EDuration="22.109652757s" podCreationTimestamp="2025-11-25 09:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:13:23.107778554 +0000 UTC m=+3736.816809359" watchObservedRunningTime="2025-11-25 09:13:23.109652757 +0000 UTC m=+3736.818683552" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.176436 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mm2d8" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.183990 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.282179 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.287285 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.298992 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.302518 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.345431 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.365052 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.370751 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.392734 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.430219 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.430376 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.492068 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.509712 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.509985 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" containerID="cri-o://d7fa2bbb4c070621a30840b407b5585b9527b02f41c32e3a016f270b1e8850e7" gracePeriod=5 Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.511132 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.517758 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.518311 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.518629 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.527236 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ngxbf" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.535058 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-92tdz" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.564618 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.596581 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.615867 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.652183 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 09:13:23 
crc kubenswrapper[4760]: I1125 09:13:23.660591 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.674301 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.698051 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.707546 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.802353 4760 generic.go:334] "Generic (PLEG): container finished" podID="7498b2f4-5621-4e4d-8d34-d8fc09271dcf" containerID="11bab81d3c9c60598c3c9002c3968961e4f89dbf28faa81c09e9663a4f3c9aed" exitCode=1 Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.802444 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj" event={"ID":"7498b2f4-5621-4e4d-8d34-d8fc09271dcf","Type":"ContainerDied","Data":"11bab81d3c9c60598c3c9002c3968961e4f89dbf28faa81c09e9663a4f3c9aed"} Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.802519 4760 scope.go:117] "RemoveContainer" containerID="5e98f5db2c010e73c48cd3ce193e1de3189ac902b6a21c77c3adaf2f92798112" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.803363 4760 scope.go:117] "RemoveContainer" containerID="11bab81d3c9c60598c3c9002c3968961e4f89dbf28faa81c09e9663a4f3c9aed" Nov 25 09:13:23 crc kubenswrapper[4760]: E1125 09:13:23.803773 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cert-manager-cainjector 
pod=cert-manager-cainjector-7f985d654d-m6mjj_cert-manager(7498b2f4-5621-4e4d-8d34-d8fc09271dcf)\"" pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj" podUID="7498b2f4-5621-4e4d-8d34-d8fc09271dcf" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.806690 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-86mq8" event={"ID":"a6f5c6ad-5f4b-442a-9041-7f053349a0e7","Type":"ContainerStarted","Data":"d5f6a23032af78a0d0d76dd8741bc2dfead397be52894cc704218ca89d62ec7c"} Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.845159 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.846967 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.854500 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 09:13:23 crc kubenswrapper[4760]: I1125 09:13:23.921371 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.030789 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.060063 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.061045 4760 scope.go:117] "RemoveContainer" containerID="082a4a2e82422d9a4ca9debd013bb751e75aea940cb4a875bdda7f34501318f7" Nov 25 09:13:24 crc kubenswrapper[4760]: E1125 09:13:24.061323 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-kw54v_openstack-operators(1d556614-e3c1-4834-919a-0c6f5f5cc4de)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" podUID="1d556614-e3c1-4834-919a-0c6f5f5cc4de" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.073324 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.159446 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.217277 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.246906 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.299862 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.306779 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.327527 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.372189 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.486293 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 09:13:24 crc 
kubenswrapper[4760]: I1125 09:13:24.511780 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.512321 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.534619 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.535608 4760 scope.go:117] "RemoveContainer" containerID="ec8fcfe1b7098acac68b331c547de888964f923471cab1ed7dc0460ce24b22bb" Nov 25 09:13:24 crc kubenswrapper[4760]: E1125 09:13:24.535991 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-cxjcf_openstack-operators(4e773e83-c06c-47e9-8a34-ef72472e3ae8)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" podUID="4e773e83-c06c-47e9-8a34-ef72472e3ae8" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.653615 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.654349 4760 scope.go:117] "RemoveContainer" containerID="f40242a064337e6662605e340082c5eb6d57f643523c2e54bd96758fecf108ff" Nov 25 09:13:24 crc kubenswrapper[4760]: E1125 09:13:24.654655 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-54bpm_openstack-operators(002e6b13-60c5-484c-8116-b4d5241ed678)\"" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" podUID="002e6b13-60c5-484c-8116-b4d5241ed678" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.659820 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.686846 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.687442 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.740319 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.741161 4760 scope.go:117] "RemoveContainer" containerID="eedf461c9950c6e80650ed140cc368daaf1b253329b46541f04d6248ffab463b" Nov 25 09:13:24 crc kubenswrapper[4760]: E1125 09:13:24.741526 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-j5fsj_openstack-operators(23471a89-c4fb-4e45-b7bb-2664e4ea99f3)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" podUID="23471a89-c4fb-4e45-b7bb-2664e4ea99f3" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.825802 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.826569 4760 scope.go:117] "RemoveContainer" containerID="d612bc8fcf1171d535f0d49d3d30acc3064d07961df66aec229a2ec787a7b925" Nov 25 09:13:24 crc kubenswrapper[4760]: E1125 09:13:24.826805 4760 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pmw6n_openstack-operators(8aea8bb6-720b-412a-acfc-f62366da5de5)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" podUID="8aea8bb6-720b-412a-acfc-f62366da5de5" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.847078 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.859062 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.886974 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.896799 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.897925 4760 scope.go:117] "RemoveContainer" containerID="cc6fc6e89fd3b58fe7d6daada9d30327611aa75bb7fd160d58faf5034b264de3" Nov 25 09:13:24 crc kubenswrapper[4760]: E1125 09:13:24.898302 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-plxrr_openstack-operators(cef58941-ae6b-4624-af41-65ab598838eb)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.899353 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.928245 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.938684 4760 scope.go:117] "RemoveContainer" containerID="447b600e223dd023afc665bbc8e69de26c4c58899c9aae857fc26a2bd42f8ad3" Nov 25 09:13:24 crc kubenswrapper[4760]: I1125 09:13:24.995359 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.015407 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.022124 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.072288 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.146079 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mhr6s" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.147943 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.187771 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.245071 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-t29h5" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.357648 4760 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-fdpj8" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.408998 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.409179 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.427054 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.444133 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bwhbm" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.448538 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.591763 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.595779 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.607341 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.648035 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jdhsk" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.648139 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 
09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.653427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.673862 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.831049 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" event={"ID":"b4325bd6-c276-4fbc-bc67-cf5a026c3537","Type":"ContainerStarted","Data":"32841b987e4468c3d1e947ed61c045ad805714af7f8e65e0bc9d465651ade0d4"} Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.832765 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.846120 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.890600 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.913599 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.921700 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.939196 4760 scope.go:117] "RemoveContainer" containerID="e13614c5975081f9167f3d40da509ee61510733249dda0b5f5584a915a83c964" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.948308 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 09:13:25 crc kubenswrapper[4760]: I1125 09:13:25.973763 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.077277 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jpfdb" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.084928 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-78cf4" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.201733 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.279825 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.307088 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.321606 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.333223 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.367144 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.376403 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.674541 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 09:13:26 
crc kubenswrapper[4760]: I1125 09:13:26.687422 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-p8p99" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.780759 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.799719 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.855108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" event={"ID":"6d9d0ad6-0976-4f14-81fb-f286f6768256","Type":"ContainerStarted","Data":"33a01e40481848f0233b8baf6e111721004ad87758430d25bd277cb675ab0474"} Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.855877 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.938041 4760 scope.go:117] "RemoveContainer" containerID="57e9dcb648b2a4ac82fa82cee59ed3cf96406bf175d172c9b7314f1d9ca12797" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.949389 4760 scope.go:117] "RemoveContainer" containerID="25412b31aca55ca851c51777d6bbcd451a20170fc242f9b770c57bdf4b3c140a" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.951435 4760 scope.go:117] "RemoveContainer" containerID="fd2d0dab86af05c5c2d2aabf17e5d41e875adad32438772f152ba7f816f278c9" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.954354 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 09:13:26 crc kubenswrapper[4760]: I1125 09:13:26.983167 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.027175 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.074965 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.177311 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.204853 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.205118 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.225346 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.244060 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.365624 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.411325 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.637671 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.641201 4760 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.802139 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.866595 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" event={"ID":"97e97ce2-b50b-478e-acb2-cbdd5232d67c","Type":"ContainerStarted","Data":"10b4a5e4bb79ad6c33ead8ab8a36cdf4ba9a6aca0b818c05f316fef22f2893c9"} Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.867997 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.871630 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" event={"ID":"9291524e-d650-4366-b795-162d53bf2815","Type":"ContainerStarted","Data":"5b89e1e82edcacfa3564082e2409d6a6387b4bc4073e5900c9e724a6771c3bfb"} Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.871902 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.874458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" event={"ID":"890067e5-2be8-4699-8d90-f2771ef453e5","Type":"ContainerStarted","Data":"e2d95bcf8701fbbc5755f2f67fbad668d59b6370ccdadc0fc2d0b26178933994"} Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.875039 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.931309 4760 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.938699 4760 scope.go:117] "RemoveContainer" containerID="6e0de700541ea6121dc2ad91e71739be35033d9324a1f873bcee0cb6e1cd336b" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.939066 4760 scope.go:117] "RemoveContainer" containerID="902a96544371b54618f88c968eed66414cf3c63adf45daf40f7664422ce263e4" Nov 25 09:13:27 crc kubenswrapper[4760]: I1125 09:13:27.939550 4760 scope.go:117] "RemoveContainer" containerID="858e80bbba0ee7669e5a94868721d87be651860890cde3c30c116321cd1559e0" Nov 25 09:13:27 crc kubenswrapper[4760]: E1125 09:13:27.939781 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-5crqc_openstack-operators(a9a9b42e-4d3b-495e-804e-af02af05581d)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" podUID="a9a9b42e-4d3b-495e-804e-af02af05581d" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.076506 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.334049 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.518150 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.656763 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.855298 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.856015 4760 scope.go:117] "RemoveContainer" containerID="7e683bf578987f686f3a98585d1fb169c8c66a7e420d5fea78577ba81b0a5740" Nov 25 09:13:28 crc kubenswrapper[4760]: E1125 09:13:28.856379 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-7cd5954d9-wmmn4_openstack-operators(c43ab37e-375d-4000-8313-9ea135250641)\"" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" podUID="c43ab37e-375d-4000-8313-9ea135250641" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.886266 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" event={"ID":"25f372bf-e250-492b-abb9-680b1efdbdec","Type":"ContainerStarted","Data":"d6264d8c1b3580d6bc79feeea5be8f8b76fdad0b2bf798285a762ddd2c2508f3"} Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.886528 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.888659 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" event={"ID":"65361481-df4d-4010-a478-91fd2c50d9e6","Type":"ContainerStarted","Data":"c10020d1df998efbd123470508f276703a84a61a9a32fbb461af3bb63d91bb96"} Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.888846 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.893832 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.893881 4760 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d7fa2bbb4c070621a30840b407b5585b9527b02f41c32e3a016f270b1e8850e7" exitCode=137 Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.938779 4760 scope.go:117] "RemoveContainer" containerID="3d99b6d8a7383bdefe54cd1a026e7097a65535a34199b1ee4dacfcae39e2720f" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.938837 4760 scope.go:117] "RemoveContainer" containerID="b7f2d5db183f786726bbec0d16baa35f5a6aebb093333bcf767491f62c6fc104" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.939046 4760 scope.go:117] "RemoveContainer" containerID="43da6dce3d2a1f1a08db7b86e53bda72c9429ed1c7d6b52f142975e6ba214b68" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.939195 4760 scope.go:117] "RemoveContainer" containerID="0332ad36cd62f0151d1d92f9e0ecff9e2b50385a38068b6f6a37c73f897293eb" Nov 25 09:13:28 crc kubenswrapper[4760]: I1125 09:13:28.939564 4760 scope.go:117] "RemoveContainer" containerID="b654ec4170d2055e08b7e47ff2fdbdc9e608d9eef0243f166e2b0b54705c0ffe" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.169556 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.170003 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.310900 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.348092 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.348204 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.348227 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.348274 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.348312 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 
09:13:29.348839 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.348875 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.348898 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.349704 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.359137 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.451087 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.451125 4760 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.451134 4760 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.451142 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.451150 4760 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.906743 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" event={"ID":"394da4a0-f1c0-45c3-a31b-9cace1180c53","Type":"ContainerStarted","Data":"6e7a6a48ac7ac5c40ead3f9380de0d83e43e614735bf6890b30ea8681f0fc73f"} Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.907483 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.910424 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" event={"ID":"0f496ee1-ca51-427f-a51d-4fc214c7f50a","Type":"ContainerStarted","Data":"2ebea4243243b03b54b5c03b61137ba6747a6af19ea5306b6853c1cca33dbda0"} Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.911059 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.914401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" event={"ID":"6dde35ac-ff01-4e46-9eae-234e6abc37dc","Type":"ContainerStarted","Data":"bd684d32248be7def82c3dd63649176a3fd8cc3356537b9428f5648469927357"} Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.914837 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.917472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" event={"ID":"f0f31412-34be-4b9d-8df1-b53d23abb1f6","Type":"ContainerStarted","Data":"a2b75496aed29f4ab6a23e10632a8dab543108a180c4cca27aa86fc4bd73edf3"} Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.917871 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.923303 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.923464 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.924124 4760 scope.go:117] "RemoveContainer" containerID="d7fa2bbb4c070621a30840b407b5585b9527b02f41c32e3a016f270b1e8850e7" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.929395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" event={"ID":"03a9ee81-2733-444d-8edc-ddb1303b5686","Type":"ContainerStarted","Data":"66188e2b03e9a2e7d535643bd92be60e66e122a068e5eba0c516b7b518863721"} Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.929980 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.940401 4760 scope.go:117] "RemoveContainer" containerID="6da083de3278fe701ddf2d001a9498b330aa9702e5bc373edddd2c153cb45a79" Nov 25 09:13:29 crc kubenswrapper[4760]: I1125 09:13:29.940946 4760 scope.go:117] "RemoveContainer" containerID="ab055c959ebc58c38c3f6418b80043684df193808b9c84c36395f001a056cc52" Nov 25 09:13:30 crc kubenswrapper[4760]: I1125 09:13:30.949539 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 09:13:30 crc kubenswrapper[4760]: I1125 09:13:30.949998 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" event={"ID":"33faed21-8b19-4064-a6e2-5064ce8cbab2","Type":"ContainerStarted","Data":"8d4f4488170590b621eb05bb0f399daaebcdebb8b239de3d3784d25e94ab9c37"} Nov 25 09:13:30 crc kubenswrapper[4760]: I1125 09:13:30.950024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" 
event={"ID":"f531ae0e-78ad-4d2c-951f-0d1f7d1c8129","Type":"ContainerStarted","Data":"991cd79cd5723f53d1aa6cc5557131a3e1e2c5e27718867979ab2c22ea11da35"} Nov 25 09:13:30 crc kubenswrapper[4760]: I1125 09:13:30.950490 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 09:13:30 crc kubenswrapper[4760]: I1125 09:13:30.950555 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 09:13:32 crc kubenswrapper[4760]: I1125 09:13:32.939138 4760 scope.go:117] "RemoveContainer" containerID="3a1fffa497d07ac9b3589a1296e0db0bdffbd8f4fe9abd09127981892028f477" Nov 25 09:13:33 crc kubenswrapper[4760]: I1125 09:13:33.860847 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-hlbbf" Nov 25 09:13:33 crc kubenswrapper[4760]: I1125 09:13:33.954811 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-6cjlz" Nov 25 09:13:33 crc kubenswrapper[4760]: I1125 09:13:33.973259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd20932f-cb28-4343-98df-425123f7c87f","Type":"ContainerStarted","Data":"ae24ccde22c59733010c779e5c8f6087fdd2fa8f64d7582d46d811a6104d9d10"} Nov 25 09:13:33 crc kubenswrapper[4760]: I1125 09:13:33.973658 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.001288 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-l28cr" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.002258 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l24ns" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.060039 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.060886 4760 scope.go:117] "RemoveContainer" containerID="082a4a2e82422d9a4ca9debd013bb751e75aea940cb4a875bdda7f34501318f7" Nov 25 09:13:34 crc kubenswrapper[4760]: E1125 09:13:34.061438 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-kw54v_openstack-operators(1d556614-e3c1-4834-919a-0c6f5f5cc4de)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" podUID="1d556614-e3c1-4834-919a-0c6f5f5cc4de" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.070521 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-x7r44" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.329480 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-s4q64" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.465498 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l7cv5" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.535426 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.536177 4760 scope.go:117] "RemoveContainer" 
containerID="ec8fcfe1b7098acac68b331c547de888964f923471cab1ed7dc0460ce24b22bb" Nov 25 09:13:34 crc kubenswrapper[4760]: E1125 09:13:34.536629 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-cxjcf_openstack-operators(4e773e83-c06c-47e9-8a34-ef72472e3ae8)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" podUID="4e773e83-c06c-47e9-8a34-ef72472e3ae8" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.653973 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.654858 4760 scope.go:117] "RemoveContainer" containerID="f40242a064337e6662605e340082c5eb6d57f643523c2e54bd96758fecf108ff" Nov 25 09:13:34 crc kubenswrapper[4760]: E1125 09:13:34.655354 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-54bpm_openstack-operators(002e6b13-60c5-484c-8116-b4d5241ed678)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" podUID="002e6b13-60c5-484c-8116-b4d5241ed678" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.739851 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.740683 4760 scope.go:117] "RemoveContainer" containerID="eedf461c9950c6e80650ed140cc368daaf1b253329b46541f04d6248ffab463b" Nov 25 09:13:34 crc kubenswrapper[4760]: E1125 09:13:34.741002 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-j5fsj_openstack-operators(23471a89-c4fb-4e45-b7bb-2664e4ea99f3)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" podUID="23471a89-c4fb-4e45-b7bb-2664e4ea99f3" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.765754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wvv98" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.825618 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.826386 4760 scope.go:117] "RemoveContainer" containerID="d612bc8fcf1171d535f0d49d3d30acc3064d07961df66aec229a2ec787a7b925" Nov 25 09:13:34 crc kubenswrapper[4760]: E1125 09:13:34.826670 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pmw6n_openstack-operators(8aea8bb6-720b-412a-acfc-f62366da5de5)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" podUID="8aea8bb6-720b-412a-acfc-f62366da5de5" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.879009 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-w4gcn" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.896320 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.898500 4760 scope.go:117] "RemoveContainer" 
containerID="cc6fc6e89fd3b58fe7d6daada9d30327611aa75bb7fd160d58faf5034b264de3" Nov 25 09:13:34 crc kubenswrapper[4760]: E1125 09:13:34.898980 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-plxrr_openstack-operators(cef58941-ae6b-4624-af41-65ab598838eb)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb" Nov 25 09:13:34 crc kubenswrapper[4760]: I1125 09:13:34.979009 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-cr5ch" Nov 25 09:13:35 crc kubenswrapper[4760]: I1125 09:13:35.938504 4760 scope.go:117] "RemoveContainer" containerID="11bab81d3c9c60598c3c9002c3968961e4f89dbf28faa81c09e9663a4f3c9aed" Nov 25 09:13:37 crc kubenswrapper[4760]: I1125 09:13:37.004071 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-m6mjj" event={"ID":"7498b2f4-5621-4e4d-8d34-d8fc09271dcf","Type":"ContainerStarted","Data":"6ed7e1bac9b63b7d8c1bc033afd54ee0d74d3c7eec458f3fe9a11e49aeccf58d"} Nov 25 09:13:38 crc kubenswrapper[4760]: I1125 09:13:38.837372 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 09:13:38 crc kubenswrapper[4760]: I1125 09:13:38.855964 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 09:13:38 crc kubenswrapper[4760]: I1125 09:13:38.856802 4760 scope.go:117] "RemoveContainer" containerID="7e683bf578987f686f3a98585d1fb169c8c66a7e420d5fea78577ba81b0a5740" Nov 25 09:13:38 crc kubenswrapper[4760]: E1125 09:13:38.857073 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-7cd5954d9-wmmn4_openstack-operators(c43ab37e-375d-4000-8313-9ea135250641)\"" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" podUID="c43ab37e-375d-4000-8313-9ea135250641" Nov 25 09:13:38 crc kubenswrapper[4760]: I1125 09:13:38.938879 4760 scope.go:117] "RemoveContainer" containerID="858e80bbba0ee7669e5a94868721d87be651860890cde3c30c116321cd1559e0" Nov 25 09:13:40 crc kubenswrapper[4760]: I1125 09:13:40.033455 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5crqc" event={"ID":"a9a9b42e-4d3b-495e-804e-af02af05581d","Type":"ContainerStarted","Data":"12baca2126d6283736a0ce0ffdedaeb53f22a645452934c51efe35996f7a4cdd"} Nov 25 09:13:43 crc kubenswrapper[4760]: I1125 09:13:43.878495 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-k4dk2" Nov 25 09:13:43 crc kubenswrapper[4760]: I1125 09:13:43.903558 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-xghfv" Nov 25 09:13:44 crc kubenswrapper[4760]: I1125 09:13:44.610280 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-njfjf" Nov 25 09:13:44 crc kubenswrapper[4760]: I1125 09:13:44.938738 4760 scope.go:117] "RemoveContainer" containerID="082a4a2e82422d9a4ca9debd013bb751e75aea940cb4a875bdda7f34501318f7" Nov 25 09:13:46 crc kubenswrapper[4760]: I1125 09:13:46.089795 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" 
event={"ID":"1d556614-e3c1-4834-919a-0c6f5f5cc4de","Type":"ContainerStarted","Data":"2d87c98690b542f760fdb0b11088eee378e39b728416c56fdc955595c21dcd46"} Nov 25 09:13:46 crc kubenswrapper[4760]: I1125 09:13:46.090322 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 09:13:47 crc kubenswrapper[4760]: I1125 09:13:47.938326 4760 scope.go:117] "RemoveContainer" containerID="ec8fcfe1b7098acac68b331c547de888964f923471cab1ed7dc0460ce24b22bb" Nov 25 09:13:48 crc kubenswrapper[4760]: I1125 09:13:48.938431 4760 scope.go:117] "RemoveContainer" containerID="cc6fc6e89fd3b58fe7d6daada9d30327611aa75bb7fd160d58faf5034b264de3" Nov 25 09:13:48 crc kubenswrapper[4760]: I1125 09:13:48.938757 4760 scope.go:117] "RemoveContainer" containerID="d612bc8fcf1171d535f0d49d3d30acc3064d07961df66aec229a2ec787a7b925" Nov 25 09:13:48 crc kubenswrapper[4760]: I1125 09:13:48.939548 4760 scope.go:117] "RemoveContainer" containerID="f40242a064337e6662605e340082c5eb6d57f643523c2e54bd96758fecf108ff" Nov 25 09:13:49 crc kubenswrapper[4760]: I1125 09:13:49.120127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" event={"ID":"4e773e83-c06c-47e9-8a34-ef72472e3ae8","Type":"ContainerStarted","Data":"48922489eef0e4c64bfd3cd00cdbc0593f18ee616374bfa4e4027bd319263a4d"} Nov 25 09:13:49 crc kubenswrapper[4760]: I1125 09:13:49.120654 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 09:13:49 crc kubenswrapper[4760]: I1125 09:13:49.938609 4760 scope.go:117] "RemoveContainer" containerID="eedf461c9950c6e80650ed140cc368daaf1b253329b46541f04d6248ffab463b" Nov 25 09:13:50 crc kubenswrapper[4760]: I1125 09:13:50.132838 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" event={"ID":"8aea8bb6-720b-412a-acfc-f62366da5de5","Type":"ContainerStarted","Data":"2fc00dcdd00deb39b3bc36a57a4fee8303ee2e9de6d0d953cffffe8190f53b9e"} Nov 25 09:13:50 crc kubenswrapper[4760]: I1125 09:13:50.133068 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 09:13:50 crc kubenswrapper[4760]: I1125 09:13:50.136345 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" event={"ID":"002e6b13-60c5-484c-8116-b4d5241ed678","Type":"ContainerStarted","Data":"186bddcfb223c55491cbd64f849d8ad47a17b689fd93b24791d7a1b443a0abb2"} Nov 25 09:13:50 crc kubenswrapper[4760]: I1125 09:13:50.136839 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 09:13:50 crc kubenswrapper[4760]: I1125 09:13:50.139548 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" event={"ID":"cef58941-ae6b-4624-af41-65ab598838eb","Type":"ContainerStarted","Data":"34b12b15cf7aee595061aa9cff96f4680c10488d3a971d738633c50003ba1681"} Nov 25 09:13:50 crc kubenswrapper[4760]: I1125 09:13:50.139994 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 09:13:51 crc kubenswrapper[4760]: I1125 09:13:51.152470 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" event={"ID":"23471a89-c4fb-4e45-b7bb-2664e4ea99f3","Type":"ContainerStarted","Data":"21509c45bb06d0b4b651a86e313526202c411a3b8ec8f62c230410ee9610ef1c"} Nov 25 09:13:51 crc kubenswrapper[4760]: I1125 09:13:51.153181 4760 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 09:13:51 crc kubenswrapper[4760]: I1125 09:13:51.938807 4760 scope.go:117] "RemoveContainer" containerID="7e683bf578987f686f3a98585d1fb169c8c66a7e420d5fea78577ba81b0a5740" Nov 25 09:13:52 crc kubenswrapper[4760]: I1125 09:13:52.163527 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" event={"ID":"c43ab37e-375d-4000-8313-9ea135250641","Type":"ContainerStarted","Data":"9a6a654ed20997151f97ddfd42015a06cde1b2ffb7168efcde6badb971460076"} Nov 25 09:13:52 crc kubenswrapper[4760]: I1125 09:13:52.163822 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 09:13:54 crc kubenswrapper[4760]: I1125 09:13:54.063134 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-kw54v" Nov 25 09:13:54 crc kubenswrapper[4760]: I1125 09:13:54.537509 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-cxjcf" Nov 25 09:13:54 crc kubenswrapper[4760]: I1125 09:13:54.656981 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-54bpm" Nov 25 09:13:54 crc kubenswrapper[4760]: I1125 09:13:54.828755 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pmw6n" Nov 25 09:13:54 crc kubenswrapper[4760]: I1125 09:13:54.907196 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" Nov 25 09:13:58 crc kubenswrapper[4760]: I1125 
09:13:58.863512 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-wmmn4" Nov 25 09:14:01 crc kubenswrapper[4760]: I1125 09:14:01.745873 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:14:01 crc kubenswrapper[4760]: I1125 09:14:01.746517 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:14:03 crc kubenswrapper[4760]: I1125 09:14:03.833831 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76784bbdf-m7z64" Nov 25 09:14:04 crc kubenswrapper[4760]: I1125 09:14:04.741936 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-j5fsj" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.586178 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p7rtg"] Nov 25 09:14:08 crc kubenswrapper[4760]: E1125 09:14:08.587299 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" containerName="installer" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.587317 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" containerName="installer" Nov 25 09:14:08 crc kubenswrapper[4760]: E1125 09:14:08.587347 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.587355 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.587575 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.587608 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d019e132-fba9-43bc-80c5-01bb4ac44303" containerName="installer" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.589194 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.596880 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7rtg"] Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.643986 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-catalog-content\") pod \"certified-operators-p7rtg\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.644168 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-utilities\") pod \"certified-operators-p7rtg\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.644231 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swpnz\" (UniqueName: \"kubernetes.io/projected/324736fb-a998-4759-8ad3-7653af6392c9-kube-api-access-swpnz\") pod \"certified-operators-p7rtg\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.745724 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-utilities\") pod \"certified-operators-p7rtg\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.746093 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swpnz\" (UniqueName: \"kubernetes.io/projected/324736fb-a998-4759-8ad3-7653af6392c9-kube-api-access-swpnz\") pod \"certified-operators-p7rtg\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.746234 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-utilities\") pod \"certified-operators-p7rtg\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.746373 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-catalog-content\") pod \"certified-operators-p7rtg\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:08 crc kubenswrapper[4760]: I1125 09:14:08.746669 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-catalog-content\") pod \"certified-operators-p7rtg\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:09 crc kubenswrapper[4760]: I1125 09:14:09.258702 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swpnz\" (UniqueName: \"kubernetes.io/projected/324736fb-a998-4759-8ad3-7653af6392c9-kube-api-access-swpnz\") pod \"certified-operators-p7rtg\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:09 crc kubenswrapper[4760]: I1125 09:14:09.516100 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:09 crc kubenswrapper[4760]: I1125 09:14:09.977226 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p7rtg"] Nov 25 09:14:10 crc kubenswrapper[4760]: I1125 09:14:10.326670 4760 generic.go:334] "Generic (PLEG): container finished" podID="324736fb-a998-4759-8ad3-7653af6392c9" containerID="2bf943f4f5557cdcc983d3fa5bea978a2a99a9b4de7ae6f85083c89272a9b7bc" exitCode=0 Nov 25 09:14:10 crc kubenswrapper[4760]: I1125 09:14:10.326776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7rtg" event={"ID":"324736fb-a998-4759-8ad3-7653af6392c9","Type":"ContainerDied","Data":"2bf943f4f5557cdcc983d3fa5bea978a2a99a9b4de7ae6f85083c89272a9b7bc"} Nov 25 09:14:10 crc kubenswrapper[4760]: I1125 09:14:10.326980 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7rtg" event={"ID":"324736fb-a998-4759-8ad3-7653af6392c9","Type":"ContainerStarted","Data":"92750a8d6d96720d83b394d9470a96dddced0429431cf452e405273e6895df3f"} Nov 25 09:14:12 crc 
kubenswrapper[4760]: I1125 09:14:12.181786 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pmfgv"] Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.184437 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.203855 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmfgv"] Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.314954 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7rh\" (UniqueName: \"kubernetes.io/projected/8a34eebb-219b-4a44-ba93-8b8158edcbc9-kube-api-access-pp7rh\") pod \"redhat-operators-pmfgv\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.315052 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-catalog-content\") pod \"redhat-operators-pmfgv\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.315459 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-utilities\") pod \"redhat-operators-pmfgv\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.346239 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7rtg" 
event={"ID":"324736fb-a998-4759-8ad3-7653af6392c9","Type":"ContainerStarted","Data":"b395dac97034d119e1ea6baecfe4c7ac8d2e5c6866703cd69806ff260292d705"} Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.417091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-utilities\") pod \"redhat-operators-pmfgv\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.417173 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7rh\" (UniqueName: \"kubernetes.io/projected/8a34eebb-219b-4a44-ba93-8b8158edcbc9-kube-api-access-pp7rh\") pod \"redhat-operators-pmfgv\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.417225 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-catalog-content\") pod \"redhat-operators-pmfgv\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.417998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-catalog-content\") pod \"redhat-operators-pmfgv\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.418106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-utilities\") pod \"redhat-operators-pmfgv\" (UID: 
\"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.436165 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7rh\" (UniqueName: \"kubernetes.io/projected/8a34eebb-219b-4a44-ba93-8b8158edcbc9-kube-api-access-pp7rh\") pod \"redhat-operators-pmfgv\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:12 crc kubenswrapper[4760]: I1125 09:14:12.521630 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:13 crc kubenswrapper[4760]: I1125 09:14:13.039968 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmfgv"] Nov 25 09:14:13 crc kubenswrapper[4760]: W1125 09:14:13.064389 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a34eebb_219b_4a44_ba93_8b8158edcbc9.slice/crio-ed5a094318dbf46db6bf1aeb36426c66f38c038f7af125a9bf80e0cc2179ae4f WatchSource:0}: Error finding container ed5a094318dbf46db6bf1aeb36426c66f38c038f7af125a9bf80e0cc2179ae4f: Status 404 returned error can't find the container with id ed5a094318dbf46db6bf1aeb36426c66f38c038f7af125a9bf80e0cc2179ae4f Nov 25 09:14:13 crc kubenswrapper[4760]: I1125 09:14:13.364521 4760 generic.go:334] "Generic (PLEG): container finished" podID="324736fb-a998-4759-8ad3-7653af6392c9" containerID="b395dac97034d119e1ea6baecfe4c7ac8d2e5c6866703cd69806ff260292d705" exitCode=0 Nov 25 09:14:13 crc kubenswrapper[4760]: I1125 09:14:13.364591 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7rtg" event={"ID":"324736fb-a998-4759-8ad3-7653af6392c9","Type":"ContainerDied","Data":"b395dac97034d119e1ea6baecfe4c7ac8d2e5c6866703cd69806ff260292d705"} Nov 25 09:14:13 crc 
kubenswrapper[4760]: I1125 09:14:13.367562 4760 generic.go:334] "Generic (PLEG): container finished" podID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerID="2597d58f004dbf0fcc08bf68300b2609372fea62b7de6131acc048f86f21105d" exitCode=0 Nov 25 09:14:13 crc kubenswrapper[4760]: I1125 09:14:13.367619 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmfgv" event={"ID":"8a34eebb-219b-4a44-ba93-8b8158edcbc9","Type":"ContainerDied","Data":"2597d58f004dbf0fcc08bf68300b2609372fea62b7de6131acc048f86f21105d"} Nov 25 09:14:13 crc kubenswrapper[4760]: I1125 09:14:13.367651 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmfgv" event={"ID":"8a34eebb-219b-4a44-ba93-8b8158edcbc9","Type":"ContainerStarted","Data":"ed5a094318dbf46db6bf1aeb36426c66f38c038f7af125a9bf80e0cc2179ae4f"} Nov 25 09:14:14 crc kubenswrapper[4760]: I1125 09:14:14.379857 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7rtg" event={"ID":"324736fb-a998-4759-8ad3-7653af6392c9","Type":"ContainerStarted","Data":"75fdd77f28fe6e583d35bfa2c65c379463e16c3d11ef66651c91a1ab6a315404"} Nov 25 09:14:14 crc kubenswrapper[4760]: I1125 09:14:14.402188 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p7rtg" podStartSLOduration=2.840494235 podStartE2EDuration="6.402168385s" podCreationTimestamp="2025-11-25 09:14:08 +0000 UTC" firstStartedPulling="2025-11-25 09:14:10.328734259 +0000 UTC m=+3784.037765054" lastFinishedPulling="2025-11-25 09:14:13.890408409 +0000 UTC m=+3787.599439204" observedRunningTime="2025-11-25 09:14:14.396667298 +0000 UTC m=+3788.105698103" watchObservedRunningTime="2025-11-25 09:14:14.402168385 +0000 UTC m=+3788.111199180" Nov 25 09:14:15 crc kubenswrapper[4760]: I1125 09:14:15.391917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pmfgv" event={"ID":"8a34eebb-219b-4a44-ba93-8b8158edcbc9","Type":"ContainerStarted","Data":"051124b39e972dc370da2f24984a82a0488185c0cf09acdc8b1dae5198ce7472"} Nov 25 09:14:16 crc kubenswrapper[4760]: I1125 09:14:16.407285 4760 generic.go:334] "Generic (PLEG): container finished" podID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerID="051124b39e972dc370da2f24984a82a0488185c0cf09acdc8b1dae5198ce7472" exitCode=0 Nov 25 09:14:16 crc kubenswrapper[4760]: I1125 09:14:16.407512 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmfgv" event={"ID":"8a34eebb-219b-4a44-ba93-8b8158edcbc9","Type":"ContainerDied","Data":"051124b39e972dc370da2f24984a82a0488185c0cf09acdc8b1dae5198ce7472"} Nov 25 09:14:17 crc kubenswrapper[4760]: I1125 09:14:17.419825 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmfgv" event={"ID":"8a34eebb-219b-4a44-ba93-8b8158edcbc9","Type":"ContainerStarted","Data":"a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62"} Nov 25 09:14:17 crc kubenswrapper[4760]: I1125 09:14:17.446562 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pmfgv" podStartSLOduration=1.689354495 podStartE2EDuration="5.446544022s" podCreationTimestamp="2025-11-25 09:14:12 +0000 UTC" firstStartedPulling="2025-11-25 09:14:13.369151864 +0000 UTC m=+3787.078182659" lastFinishedPulling="2025-11-25 09:14:17.126341391 +0000 UTC m=+3790.835372186" observedRunningTime="2025-11-25 09:14:17.43876605 +0000 UTC m=+3791.147796855" watchObservedRunningTime="2025-11-25 09:14:17.446544022 +0000 UTC m=+3791.155574817" Nov 25 09:14:19 crc kubenswrapper[4760]: I1125 09:14:19.516632 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:19 crc kubenswrapper[4760]: I1125 09:14:19.517029 4760 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:19 crc kubenswrapper[4760]: I1125 09:14:19.804587 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:20 crc kubenswrapper[4760]: I1125 09:14:20.493683 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:22 crc kubenswrapper[4760]: I1125 09:14:22.521902 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:22 crc kubenswrapper[4760]: I1125 09:14:22.521965 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:22 crc kubenswrapper[4760]: I1125 09:14:22.571326 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:23 crc kubenswrapper[4760]: I1125 09:14:23.527703 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.058465 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7rtg"] Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.059379 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p7rtg" podUID="324736fb-a998-4759-8ad3-7653af6392c9" containerName="registry-server" containerID="cri-o://75fdd77f28fe6e583d35bfa2c65c379463e16c3d11ef66651c91a1ab6a315404" gracePeriod=2 Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.521681 4760 generic.go:334] "Generic (PLEG): container finished" podID="324736fb-a998-4759-8ad3-7653af6392c9" 
containerID="75fdd77f28fe6e583d35bfa2c65c379463e16c3d11ef66651c91a1ab6a315404" exitCode=0 Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.521775 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7rtg" event={"ID":"324736fb-a998-4759-8ad3-7653af6392c9","Type":"ContainerDied","Data":"75fdd77f28fe6e583d35bfa2c65c379463e16c3d11ef66651c91a1ab6a315404"} Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.862113 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.962303 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-utilities\") pod \"324736fb-a998-4759-8ad3-7653af6392c9\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.962357 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swpnz\" (UniqueName: \"kubernetes.io/projected/324736fb-a998-4759-8ad3-7653af6392c9-kube-api-access-swpnz\") pod \"324736fb-a998-4759-8ad3-7653af6392c9\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.962400 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-catalog-content\") pod \"324736fb-a998-4759-8ad3-7653af6392c9\" (UID: \"324736fb-a998-4759-8ad3-7653af6392c9\") " Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.969820 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-utilities" (OuterVolumeSpecName: "utilities") pod "324736fb-a998-4759-8ad3-7653af6392c9" (UID: 
"324736fb-a998-4759-8ad3-7653af6392c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.978648 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:14:28 crc kubenswrapper[4760]: I1125 09:14:28.985540 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324736fb-a998-4759-8ad3-7653af6392c9-kube-api-access-swpnz" (OuterVolumeSpecName: "kube-api-access-swpnz") pod "324736fb-a998-4759-8ad3-7653af6392c9" (UID: "324736fb-a998-4759-8ad3-7653af6392c9"). InnerVolumeSpecName "kube-api-access-swpnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.035621 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "324736fb-a998-4759-8ad3-7653af6392c9" (UID: "324736fb-a998-4759-8ad3-7653af6392c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.079962 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swpnz\" (UniqueName: \"kubernetes.io/projected/324736fb-a998-4759-8ad3-7653af6392c9-kube-api-access-swpnz\") on node \"crc\" DevicePath \"\"" Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.079999 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/324736fb-a998-4759-8ad3-7653af6392c9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.532478 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p7rtg" event={"ID":"324736fb-a998-4759-8ad3-7653af6392c9","Type":"ContainerDied","Data":"92750a8d6d96720d83b394d9470a96dddced0429431cf452e405273e6895df3f"} Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.532572 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p7rtg" Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.532828 4760 scope.go:117] "RemoveContainer" containerID="75fdd77f28fe6e583d35bfa2c65c379463e16c3d11ef66651c91a1ab6a315404" Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.579231 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p7rtg"] Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.581324 4760 scope.go:117] "RemoveContainer" containerID="b395dac97034d119e1ea6baecfe4c7ac8d2e5c6866703cd69806ff260292d705" Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.598381 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p7rtg"] Nov 25 09:14:29 crc kubenswrapper[4760]: I1125 09:14:29.868712 4760 scope.go:117] "RemoveContainer" containerID="2bf943f4f5557cdcc983d3fa5bea978a2a99a9b4de7ae6f85083c89272a9b7bc" Nov 25 09:14:30 crc kubenswrapper[4760]: I1125 09:14:30.950805 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324736fb-a998-4759-8ad3-7653af6392c9" path="/var/lib/kubelet/pods/324736fb-a998-4759-8ad3-7653af6392c9/volumes" Nov 25 09:14:31 crc kubenswrapper[4760]: I1125 09:14:31.746157 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:14:31 crc kubenswrapper[4760]: I1125 09:14:31.746273 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:14:32 crc kubenswrapper[4760]: 
I1125 09:14:32.058115 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmfgv"] Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.058553 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pmfgv" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerName="registry-server" containerID="cri-o://a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62" gracePeriod=2 Nov 25 09:14:32 crc kubenswrapper[4760]: E1125 09:14:32.522352 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62 is running failed: container process not found" containerID="a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 09:14:32 crc kubenswrapper[4760]: E1125 09:14:32.523183 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62 is running failed: container process not found" containerID="a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 09:14:32 crc kubenswrapper[4760]: E1125 09:14:32.523664 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62 is running failed: container process not found" containerID="a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 09:14:32 crc kubenswrapper[4760]: E1125 09:14:32.523731 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pmfgv" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerName="registry-server" Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.576123 4760 generic.go:334] "Generic (PLEG): container finished" podID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerID="a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62" exitCode=0 Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.576454 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmfgv" event={"ID":"8a34eebb-219b-4a44-ba93-8b8158edcbc9","Type":"ContainerDied","Data":"a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62"} Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.714829 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.760772 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-catalog-content\") pod \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.761232 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-utilities\") pod \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.761431 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp7rh\" (UniqueName: 
\"kubernetes.io/projected/8a34eebb-219b-4a44-ba93-8b8158edcbc9-kube-api-access-pp7rh\") pod \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\" (UID: \"8a34eebb-219b-4a44-ba93-8b8158edcbc9\") " Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.761853 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-utilities" (OuterVolumeSpecName: "utilities") pod "8a34eebb-219b-4a44-ba93-8b8158edcbc9" (UID: "8a34eebb-219b-4a44-ba93-8b8158edcbc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.762215 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.771475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a34eebb-219b-4a44-ba93-8b8158edcbc9-kube-api-access-pp7rh" (OuterVolumeSpecName: "kube-api-access-pp7rh") pod "8a34eebb-219b-4a44-ba93-8b8158edcbc9" (UID: "8a34eebb-219b-4a44-ba93-8b8158edcbc9"). InnerVolumeSpecName "kube-api-access-pp7rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.844163 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a34eebb-219b-4a44-ba93-8b8158edcbc9" (UID: "8a34eebb-219b-4a44-ba93-8b8158edcbc9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.864449 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp7rh\" (UniqueName: \"kubernetes.io/projected/8a34eebb-219b-4a44-ba93-8b8158edcbc9-kube-api-access-pp7rh\") on node \"crc\" DevicePath \"\"" Nov 25 09:14:32 crc kubenswrapper[4760]: I1125 09:14:32.864492 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a34eebb-219b-4a44-ba93-8b8158edcbc9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:14:33 crc kubenswrapper[4760]: I1125 09:14:33.588345 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmfgv" event={"ID":"8a34eebb-219b-4a44-ba93-8b8158edcbc9","Type":"ContainerDied","Data":"ed5a094318dbf46db6bf1aeb36426c66f38c038f7af125a9bf80e0cc2179ae4f"} Nov 25 09:14:33 crc kubenswrapper[4760]: I1125 09:14:33.588403 4760 scope.go:117] "RemoveContainer" containerID="a471ad25fea5c8de50ad17d2e962b557c3edfdd57c3147581558263f56796b62" Nov 25 09:14:33 crc kubenswrapper[4760]: I1125 09:14:33.588551 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pmfgv" Nov 25 09:14:33 crc kubenswrapper[4760]: I1125 09:14:33.617728 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmfgv"] Nov 25 09:14:33 crc kubenswrapper[4760]: I1125 09:14:33.622180 4760 scope.go:117] "RemoveContainer" containerID="051124b39e972dc370da2f24984a82a0488185c0cf09acdc8b1dae5198ce7472" Nov 25 09:14:33 crc kubenswrapper[4760]: I1125 09:14:33.630970 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pmfgv"] Nov 25 09:14:33 crc kubenswrapper[4760]: I1125 09:14:33.667834 4760 scope.go:117] "RemoveContainer" containerID="2597d58f004dbf0fcc08bf68300b2609372fea62b7de6131acc048f86f21105d" Nov 25 09:14:34 crc kubenswrapper[4760]: I1125 09:14:34.963169 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" path="/var/lib/kubelet/pods/8a34eebb-219b-4a44-ba93-8b8158edcbc9/volumes" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.191970 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr"] Nov 25 09:15:00 crc kubenswrapper[4760]: E1125 09:15:00.193072 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324736fb-a998-4759-8ad3-7653af6392c9" containerName="extract-utilities" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.193089 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="324736fb-a998-4759-8ad3-7653af6392c9" containerName="extract-utilities" Nov 25 09:15:00 crc kubenswrapper[4760]: E1125 09:15:00.193108 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerName="registry-server" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.193115 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" 
containerName="registry-server" Nov 25 09:15:00 crc kubenswrapper[4760]: E1125 09:15:00.193128 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324736fb-a998-4759-8ad3-7653af6392c9" containerName="registry-server" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.193136 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="324736fb-a998-4759-8ad3-7653af6392c9" containerName="registry-server" Nov 25 09:15:00 crc kubenswrapper[4760]: E1125 09:15:00.193172 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324736fb-a998-4759-8ad3-7653af6392c9" containerName="extract-content" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.193178 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="324736fb-a998-4759-8ad3-7653af6392c9" containerName="extract-content" Nov 25 09:15:00 crc kubenswrapper[4760]: E1125 09:15:00.193205 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerName="extract-utilities" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.193212 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerName="extract-utilities" Nov 25 09:15:00 crc kubenswrapper[4760]: E1125 09:15:00.193232 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerName="extract-content" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.193239 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerName="extract-content" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.193855 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a34eebb-219b-4a44-ba93-8b8158edcbc9" containerName="registry-server" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.193882 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="324736fb-a998-4759-8ad3-7653af6392c9" 
containerName="registry-server" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.195126 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.197676 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.205012 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.213858 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr"] Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.308447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clgt2\" (UniqueName: \"kubernetes.io/projected/172cd1f3-5243-4c6f-910d-b29aa186283e-kube-api-access-clgt2\") pod \"collect-profiles-29401035-7f5cr\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.308542 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/172cd1f3-5243-4c6f-910d-b29aa186283e-secret-volume\") pod \"collect-profiles-29401035-7f5cr\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.308748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/172cd1f3-5243-4c6f-910d-b29aa186283e-config-volume\") pod \"collect-profiles-29401035-7f5cr\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.410562 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clgt2\" (UniqueName: \"kubernetes.io/projected/172cd1f3-5243-4c6f-910d-b29aa186283e-kube-api-access-clgt2\") pod \"collect-profiles-29401035-7f5cr\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.410697 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/172cd1f3-5243-4c6f-910d-b29aa186283e-secret-volume\") pod \"collect-profiles-29401035-7f5cr\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.410789 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/172cd1f3-5243-4c6f-910d-b29aa186283e-config-volume\") pod \"collect-profiles-29401035-7f5cr\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.411798 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/172cd1f3-5243-4c6f-910d-b29aa186283e-config-volume\") pod \"collect-profiles-29401035-7f5cr\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.416924 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/172cd1f3-5243-4c6f-910d-b29aa186283e-secret-volume\") pod \"collect-profiles-29401035-7f5cr\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.428904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clgt2\" (UniqueName: \"kubernetes.io/projected/172cd1f3-5243-4c6f-910d-b29aa186283e-kube-api-access-clgt2\") pod \"collect-profiles-29401035-7f5cr\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:00 crc kubenswrapper[4760]: I1125 09:15:00.520102 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:01 crc kubenswrapper[4760]: I1125 09:15:01.020594 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr"] Nov 25 09:15:01 crc kubenswrapper[4760]: I1125 09:15:01.746684 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:15:01 crc kubenswrapper[4760]: I1125 09:15:01.747081 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:15:01 crc kubenswrapper[4760]: I1125 09:15:01.747141 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 09:15:01 crc kubenswrapper[4760]: I1125 09:15:01.748067 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:15:01 crc kubenswrapper[4760]: I1125 09:15:01.748130 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" gracePeriod=600 Nov 25 09:15:01 crc kubenswrapper[4760]: I1125 09:15:01.857175 4760 generic.go:334] "Generic (PLEG): container finished" podID="172cd1f3-5243-4c6f-910d-b29aa186283e" containerID="15117536801b19e9b04add2c5ee1d092f20ec41c7be1acbbe18ce218eaf41cac" exitCode=0 Nov 25 09:15:01 crc kubenswrapper[4760]: I1125 09:15:01.857222 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" event={"ID":"172cd1f3-5243-4c6f-910d-b29aa186283e","Type":"ContainerDied","Data":"15117536801b19e9b04add2c5ee1d092f20ec41c7be1acbbe18ce218eaf41cac"} Nov 25 09:15:01 crc kubenswrapper[4760]: I1125 09:15:01.857261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" event={"ID":"172cd1f3-5243-4c6f-910d-b29aa186283e","Type":"ContainerStarted","Data":"6efb8c456ca2bfc1f901062fb521c26fe766230f3439d6e8983ee6bfb398b3da"} Nov 25 09:15:01 crc kubenswrapper[4760]: E1125 09:15:01.881183 4760 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:15:02 crc kubenswrapper[4760]: I1125 09:15:02.868602 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" exitCode=0 Nov 25 09:15:02 crc kubenswrapper[4760]: I1125 09:15:02.868701 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94"} Nov 25 09:15:02 crc kubenswrapper[4760]: I1125 09:15:02.869202 4760 scope.go:117] "RemoveContainer" containerID="95a670d13c42eb3ac6f3e3f1ae28374eb936ec37ccc3d0a7aab18131fbbe2cba" Nov 25 09:15:02 crc kubenswrapper[4760]: I1125 09:15:02.869878 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:15:02 crc kubenswrapper[4760]: E1125 09:15:02.870281 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.466485 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.500975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/172cd1f3-5243-4c6f-910d-b29aa186283e-config-volume\") pod \"172cd1f3-5243-4c6f-910d-b29aa186283e\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.501038 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/172cd1f3-5243-4c6f-910d-b29aa186283e-secret-volume\") pod \"172cd1f3-5243-4c6f-910d-b29aa186283e\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.501141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clgt2\" (UniqueName: \"kubernetes.io/projected/172cd1f3-5243-4c6f-910d-b29aa186283e-kube-api-access-clgt2\") pod \"172cd1f3-5243-4c6f-910d-b29aa186283e\" (UID: \"172cd1f3-5243-4c6f-910d-b29aa186283e\") " Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.502830 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/172cd1f3-5243-4c6f-910d-b29aa186283e-config-volume" (OuterVolumeSpecName: "config-volume") pod "172cd1f3-5243-4c6f-910d-b29aa186283e" (UID: "172cd1f3-5243-4c6f-910d-b29aa186283e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.507787 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172cd1f3-5243-4c6f-910d-b29aa186283e-kube-api-access-clgt2" (OuterVolumeSpecName: "kube-api-access-clgt2") pod "172cd1f3-5243-4c6f-910d-b29aa186283e" (UID: "172cd1f3-5243-4c6f-910d-b29aa186283e"). 
InnerVolumeSpecName "kube-api-access-clgt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.533758 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172cd1f3-5243-4c6f-910d-b29aa186283e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "172cd1f3-5243-4c6f-910d-b29aa186283e" (UID: "172cd1f3-5243-4c6f-910d-b29aa186283e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.603348 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/172cd1f3-5243-4c6f-910d-b29aa186283e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.603377 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/172cd1f3-5243-4c6f-910d-b29aa186283e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.603389 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clgt2\" (UniqueName: \"kubernetes.io/projected/172cd1f3-5243-4c6f-910d-b29aa186283e-kube-api-access-clgt2\") on node \"crc\" DevicePath \"\"" Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.883055 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" event={"ID":"172cd1f3-5243-4c6f-910d-b29aa186283e","Type":"ContainerDied","Data":"6efb8c456ca2bfc1f901062fb521c26fe766230f3439d6e8983ee6bfb398b3da"} Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.883422 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6efb8c456ca2bfc1f901062fb521c26fe766230f3439d6e8983ee6bfb398b3da" Nov 25 09:15:03 crc kubenswrapper[4760]: I1125 09:15:03.883074 4760 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr" Nov 25 09:15:04 crc kubenswrapper[4760]: I1125 09:15:04.545613 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4"] Nov 25 09:15:04 crc kubenswrapper[4760]: I1125 09:15:04.554597 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-zl6p4"] Nov 25 09:15:04 crc kubenswrapper[4760]: I1125 09:15:04.956398 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9012ddf-738f-4b3e-99ce-0aab039a4171" path="/var/lib/kubelet/pods/a9012ddf-738f-4b3e-99ce-0aab039a4171/volumes" Nov 25 09:15:16 crc kubenswrapper[4760]: I1125 09:15:16.946084 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:15:16 crc kubenswrapper[4760]: E1125 09:15:16.947023 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:15:27 crc kubenswrapper[4760]: I1125 09:15:27.939929 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:15:27 crc kubenswrapper[4760]: E1125 09:15:27.943406 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:15:34 crc kubenswrapper[4760]: I1125 09:15:34.495962 4760 scope.go:117] "RemoveContainer" containerID="eae7eff043228114d341ca5d73e425432439abacff20b95fd2a9adbe2be14cf7" Nov 25 09:15:42 crc kubenswrapper[4760]: I1125 09:15:42.942570 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:15:42 crc kubenswrapper[4760]: E1125 09:15:42.943668 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:15:55 crc kubenswrapper[4760]: I1125 09:15:55.938197 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:15:55 crc kubenswrapper[4760]: E1125 09:15:55.938994 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:16:10 crc kubenswrapper[4760]: I1125 09:16:10.939114 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:16:10 crc kubenswrapper[4760]: E1125 09:16:10.940717 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:16:25 crc kubenswrapper[4760]: I1125 09:16:25.938525 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:16:25 crc kubenswrapper[4760]: E1125 09:16:25.939226 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:16:38 crc kubenswrapper[4760]: I1125 09:16:38.937725 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:16:38 crc kubenswrapper[4760]: E1125 09:16:38.938553 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:16:53 crc kubenswrapper[4760]: I1125 09:16:53.938330 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:16:53 crc kubenswrapper[4760]: E1125 09:16:53.939295 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.581173 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wttvh"] Nov 25 09:17:00 crc kubenswrapper[4760]: E1125 09:17:00.582507 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172cd1f3-5243-4c6f-910d-b29aa186283e" containerName="collect-profiles" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.582529 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="172cd1f3-5243-4c6f-910d-b29aa186283e" containerName="collect-profiles" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.582799 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="172cd1f3-5243-4c6f-910d-b29aa186283e" containerName="collect-profiles" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.584504 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.598087 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wttvh"] Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.695190 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-catalog-content\") pod \"community-operators-wttvh\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.695406 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-utilities\") pod \"community-operators-wttvh\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.695483 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc7wr\" (UniqueName: \"kubernetes.io/projected/d7b41f29-cf62-428f-9aa2-e1223c74d24e-kube-api-access-hc7wr\") pod \"community-operators-wttvh\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.796723 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-catalog-content\") pod \"community-operators-wttvh\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.796832 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-utilities\") pod \"community-operators-wttvh\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.796893 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc7wr\" (UniqueName: \"kubernetes.io/projected/d7b41f29-cf62-428f-9aa2-e1223c74d24e-kube-api-access-hc7wr\") pod \"community-operators-wttvh\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.797419 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-catalog-content\") pod \"community-operators-wttvh\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.797477 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-utilities\") pod \"community-operators-wttvh\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.827161 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc7wr\" (UniqueName: \"kubernetes.io/projected/d7b41f29-cf62-428f-9aa2-e1223c74d24e-kube-api-access-hc7wr\") pod \"community-operators-wttvh\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:00 crc kubenswrapper[4760]: I1125 09:17:00.904812 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:01 crc kubenswrapper[4760]: I1125 09:17:01.436814 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wttvh"] Nov 25 09:17:01 crc kubenswrapper[4760]: I1125 09:17:01.951098 4760 generic.go:334] "Generic (PLEG): container finished" podID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerID="b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246" exitCode=0 Nov 25 09:17:01 crc kubenswrapper[4760]: I1125 09:17:01.951301 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wttvh" event={"ID":"d7b41f29-cf62-428f-9aa2-e1223c74d24e","Type":"ContainerDied","Data":"b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246"} Nov 25 09:17:01 crc kubenswrapper[4760]: I1125 09:17:01.951397 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wttvh" event={"ID":"d7b41f29-cf62-428f-9aa2-e1223c74d24e","Type":"ContainerStarted","Data":"08e8a2da20422376c47da1a5ac5c4da36285b7c90d7977a61c740dc10f2791dd"} Nov 25 09:17:02 crc kubenswrapper[4760]: I1125 09:17:02.960545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wttvh" event={"ID":"d7b41f29-cf62-428f-9aa2-e1223c74d24e","Type":"ContainerStarted","Data":"9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4"} Nov 25 09:17:03 crc kubenswrapper[4760]: I1125 09:17:03.970747 4760 generic.go:334] "Generic (PLEG): container finished" podID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerID="9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4" exitCode=0 Nov 25 09:17:03 crc kubenswrapper[4760]: I1125 09:17:03.970811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wttvh" 
event={"ID":"d7b41f29-cf62-428f-9aa2-e1223c74d24e","Type":"ContainerDied","Data":"9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4"} Nov 25 09:17:04 crc kubenswrapper[4760]: I1125 09:17:04.983616 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wttvh" event={"ID":"d7b41f29-cf62-428f-9aa2-e1223c74d24e","Type":"ContainerStarted","Data":"fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd"} Nov 25 09:17:05 crc kubenswrapper[4760]: I1125 09:17:05.005710 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wttvh" podStartSLOduration=2.56538912 podStartE2EDuration="5.005685223s" podCreationTimestamp="2025-11-25 09:17:00 +0000 UTC" firstStartedPulling="2025-11-25 09:17:01.954620582 +0000 UTC m=+3955.663651377" lastFinishedPulling="2025-11-25 09:17:04.394916685 +0000 UTC m=+3958.103947480" observedRunningTime="2025-11-25 09:17:05.00172551 +0000 UTC m=+3958.710756305" watchObservedRunningTime="2025-11-25 09:17:05.005685223 +0000 UTC m=+3958.714716018" Nov 25 09:17:06 crc kubenswrapper[4760]: I1125 09:17:06.946110 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:17:06 crc kubenswrapper[4760]: E1125 09:17:06.946581 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:17:10 crc kubenswrapper[4760]: I1125 09:17:10.905355 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:10 crc 
kubenswrapper[4760]: I1125 09:17:10.905925 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:10 crc kubenswrapper[4760]: I1125 09:17:10.971517 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:11 crc kubenswrapper[4760]: I1125 09:17:11.088607 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:11 crc kubenswrapper[4760]: I1125 09:17:11.202619 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wttvh"] Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.062995 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wttvh" podUID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerName="registry-server" containerID="cri-o://fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd" gracePeriod=2 Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.718526 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.772813 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc7wr\" (UniqueName: \"kubernetes.io/projected/d7b41f29-cf62-428f-9aa2-e1223c74d24e-kube-api-access-hc7wr\") pod \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.772906 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-catalog-content\") pod \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.772942 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-utilities\") pod \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\" (UID: \"d7b41f29-cf62-428f-9aa2-e1223c74d24e\") " Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.774044 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-utilities" (OuterVolumeSpecName: "utilities") pod "d7b41f29-cf62-428f-9aa2-e1223c74d24e" (UID: "d7b41f29-cf62-428f-9aa2-e1223c74d24e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.781498 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b41f29-cf62-428f-9aa2-e1223c74d24e-kube-api-access-hc7wr" (OuterVolumeSpecName: "kube-api-access-hc7wr") pod "d7b41f29-cf62-428f-9aa2-e1223c74d24e" (UID: "d7b41f29-cf62-428f-9aa2-e1223c74d24e"). InnerVolumeSpecName "kube-api-access-hc7wr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.835937 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7b41f29-cf62-428f-9aa2-e1223c74d24e" (UID: "d7b41f29-cf62-428f-9aa2-e1223c74d24e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.875574 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.875612 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b41f29-cf62-428f-9aa2-e1223c74d24e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:17:13 crc kubenswrapper[4760]: I1125 09:17:13.875622 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc7wr\" (UniqueName: \"kubernetes.io/projected/d7b41f29-cf62-428f-9aa2-e1223c74d24e-kube-api-access-hc7wr\") on node \"crc\" DevicePath \"\"" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.073901 4760 generic.go:334] "Generic (PLEG): container finished" podID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerID="fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd" exitCode=0 Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.073968 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wttvh" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.074019 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wttvh" event={"ID":"d7b41f29-cf62-428f-9aa2-e1223c74d24e","Type":"ContainerDied","Data":"fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd"} Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.075014 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wttvh" event={"ID":"d7b41f29-cf62-428f-9aa2-e1223c74d24e","Type":"ContainerDied","Data":"08e8a2da20422376c47da1a5ac5c4da36285b7c90d7977a61c740dc10f2791dd"} Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.075033 4760 scope.go:117] "RemoveContainer" containerID="fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.094945 4760 scope.go:117] "RemoveContainer" containerID="9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.117262 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wttvh"] Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.123691 4760 scope.go:117] "RemoveContainer" containerID="b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.127407 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wttvh"] Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.167897 4760 scope.go:117] "RemoveContainer" containerID="fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd" Nov 25 09:17:14 crc kubenswrapper[4760]: E1125 09:17:14.168432 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd\": container with ID starting with fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd not found: ID does not exist" containerID="fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.168475 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd"} err="failed to get container status \"fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd\": rpc error: code = NotFound desc = could not find container \"fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd\": container with ID starting with fc1c10d5534a1de16e8e7db9c14e09d1b412dde1162ded9f423d229fbebc47fd not found: ID does not exist" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.168525 4760 scope.go:117] "RemoveContainer" containerID="9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4" Nov 25 09:17:14 crc kubenswrapper[4760]: E1125 09:17:14.171526 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4\": container with ID starting with 9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4 not found: ID does not exist" containerID="9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.171588 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4"} err="failed to get container status \"9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4\": rpc error: code = NotFound desc = could not find container \"9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4\": container with ID 
starting with 9940fb21eeb95729fb0907fde2913361378c3def338d6fd5ab38f96f9be19cc4 not found: ID does not exist" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.171632 4760 scope.go:117] "RemoveContainer" containerID="b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246" Nov 25 09:17:14 crc kubenswrapper[4760]: E1125 09:17:14.172581 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246\": container with ID starting with b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246 not found: ID does not exist" containerID="b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.172610 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246"} err="failed to get container status \"b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246\": rpc error: code = NotFound desc = could not find container \"b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246\": container with ID starting with b7c933ac9e35d866850e03d96a469c7639cec0c0652a1d83d0356dde1b0bb246 not found: ID does not exist" Nov 25 09:17:14 crc kubenswrapper[4760]: I1125 09:17:14.954272 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" path="/var/lib/kubelet/pods/d7b41f29-cf62-428f-9aa2-e1223c74d24e/volumes" Nov 25 09:17:18 crc kubenswrapper[4760]: I1125 09:17:18.938140 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:17:18 crc kubenswrapper[4760]: E1125 09:17:18.939444 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:17:30 crc kubenswrapper[4760]: I1125 09:17:30.938895 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:17:30 crc kubenswrapper[4760]: E1125 09:17:30.939515 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:17:43 crc kubenswrapper[4760]: I1125 09:17:43.939316 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:17:43 crc kubenswrapper[4760]: E1125 09:17:43.940351 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:17:58 crc kubenswrapper[4760]: I1125 09:17:58.938548 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:17:58 crc kubenswrapper[4760]: E1125 09:17:58.939362 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:18:10 crc kubenswrapper[4760]: I1125 09:18:10.938760 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:18:10 crc kubenswrapper[4760]: E1125 09:18:10.939713 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:18:22 crc kubenswrapper[4760]: I1125 09:18:22.939534 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:18:22 crc kubenswrapper[4760]: E1125 09:18:22.940425 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:18:36 crc kubenswrapper[4760]: I1125 09:18:36.952529 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:18:36 crc kubenswrapper[4760]: E1125 09:18:36.953281 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:18:48 crc kubenswrapper[4760]: I1125 09:18:48.939207 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:18:48 crc kubenswrapper[4760]: E1125 09:18:48.941305 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.418925 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bxblb"] Nov 25 09:18:52 crc kubenswrapper[4760]: E1125 09:18:52.419640 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerName="registry-server" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.419655 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerName="registry-server" Nov 25 09:18:52 crc kubenswrapper[4760]: E1125 09:18:52.419694 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerName="extract-utilities" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.419701 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerName="extract-utilities" Nov 25 09:18:52 crc kubenswrapper[4760]: E1125 
09:18:52.419719 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerName="extract-content" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.419725 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerName="extract-content" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.419914 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b41f29-cf62-428f-9aa2-e1223c74d24e" containerName="registry-server" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.421445 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.455682 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxblb"] Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.573911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-catalog-content\") pod \"redhat-marketplace-bxblb\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.574035 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bwd\" (UniqueName: \"kubernetes.io/projected/a3eb4a58-0413-4b4b-af51-8510b4c21b70-kube-api-access-k8bwd\") pod \"redhat-marketplace-bxblb\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.574090 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-utilities\") pod \"redhat-marketplace-bxblb\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.678648 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8bwd\" (UniqueName: \"kubernetes.io/projected/a3eb4a58-0413-4b4b-af51-8510b4c21b70-kube-api-access-k8bwd\") pod \"redhat-marketplace-bxblb\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.678729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-utilities\") pod \"redhat-marketplace-bxblb\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.678861 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-catalog-content\") pod \"redhat-marketplace-bxblb\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.680274 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-catalog-content\") pod \"redhat-marketplace-bxblb\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:52 crc kubenswrapper[4760]: I1125 09:18:52.680513 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-utilities\") pod \"redhat-marketplace-bxblb\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:53 crc kubenswrapper[4760]: I1125 09:18:53.357513 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8bwd\" (UniqueName: \"kubernetes.io/projected/a3eb4a58-0413-4b4b-af51-8510b4c21b70-kube-api-access-k8bwd\") pod \"redhat-marketplace-bxblb\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:53 crc kubenswrapper[4760]: I1125 09:18:53.653700 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:18:54 crc kubenswrapper[4760]: I1125 09:18:54.168954 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxblb"] Nov 25 09:18:55 crc kubenswrapper[4760]: I1125 09:18:55.031653 4760 generic.go:334] "Generic (PLEG): container finished" podID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerID="425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8" exitCode=0 Nov 25 09:18:55 crc kubenswrapper[4760]: I1125 09:18:55.031759 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxblb" event={"ID":"a3eb4a58-0413-4b4b-af51-8510b4c21b70","Type":"ContainerDied","Data":"425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8"} Nov 25 09:18:55 crc kubenswrapper[4760]: I1125 09:18:55.031937 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxblb" event={"ID":"a3eb4a58-0413-4b4b-af51-8510b4c21b70","Type":"ContainerStarted","Data":"6d854439446d73a2df2c32f280f89c81d7903e84844b3a87b7af5d1a161d2109"} Nov 25 09:18:55 crc kubenswrapper[4760]: I1125 09:18:55.033668 4760 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Nov 25 09:18:56 crc kubenswrapper[4760]: I1125 09:18:56.045176 4760 generic.go:334] "Generic (PLEG): container finished" podID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerID="55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162" exitCode=0 Nov 25 09:18:56 crc kubenswrapper[4760]: I1125 09:18:56.045212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxblb" event={"ID":"a3eb4a58-0413-4b4b-af51-8510b4c21b70","Type":"ContainerDied","Data":"55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162"} Nov 25 09:18:57 crc kubenswrapper[4760]: I1125 09:18:57.062065 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxblb" event={"ID":"a3eb4a58-0413-4b4b-af51-8510b4c21b70","Type":"ContainerStarted","Data":"2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12"} Nov 25 09:18:57 crc kubenswrapper[4760]: I1125 09:18:57.086017 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bxblb" podStartSLOduration=3.587996243 podStartE2EDuration="5.086000481s" podCreationTimestamp="2025-11-25 09:18:52 +0000 UTC" firstStartedPulling="2025-11-25 09:18:55.03344446 +0000 UTC m=+4068.742475255" lastFinishedPulling="2025-11-25 09:18:56.531448698 +0000 UTC m=+4070.240479493" observedRunningTime="2025-11-25 09:18:57.084892219 +0000 UTC m=+4070.793923014" watchObservedRunningTime="2025-11-25 09:18:57.086000481 +0000 UTC m=+4070.795031276" Nov 25 09:19:00 crc kubenswrapper[4760]: I1125 09:19:00.939316 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:19:00 crc kubenswrapper[4760]: E1125 09:19:00.940251 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:19:03 crc kubenswrapper[4760]: I1125 09:19:03.654620 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:19:03 crc kubenswrapper[4760]: I1125 09:19:03.655168 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:19:03 crc kubenswrapper[4760]: I1125 09:19:03.998934 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:19:04 crc kubenswrapper[4760]: I1125 09:19:04.170774 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:19:04 crc kubenswrapper[4760]: I1125 09:19:04.235305 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxblb"] Nov 25 09:19:06 crc kubenswrapper[4760]: I1125 09:19:06.138827 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bxblb" podUID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerName="registry-server" containerID="cri-o://2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12" gracePeriod=2 Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.018730 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.096015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-catalog-content\") pod \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.096169 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-utilities\") pod \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.096422 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8bwd\" (UniqueName: \"kubernetes.io/projected/a3eb4a58-0413-4b4b-af51-8510b4c21b70-kube-api-access-k8bwd\") pod \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\" (UID: \"a3eb4a58-0413-4b4b-af51-8510b4c21b70\") " Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.099699 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-utilities" (OuterVolumeSpecName: "utilities") pod "a3eb4a58-0413-4b4b-af51-8510b4c21b70" (UID: "a3eb4a58-0413-4b4b-af51-8510b4c21b70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.104944 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3eb4a58-0413-4b4b-af51-8510b4c21b70-kube-api-access-k8bwd" (OuterVolumeSpecName: "kube-api-access-k8bwd") pod "a3eb4a58-0413-4b4b-af51-8510b4c21b70" (UID: "a3eb4a58-0413-4b4b-af51-8510b4c21b70"). InnerVolumeSpecName "kube-api-access-k8bwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.118722 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3eb4a58-0413-4b4b-af51-8510b4c21b70" (UID: "a3eb4a58-0413-4b4b-af51-8510b4c21b70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.148408 4760 generic.go:334] "Generic (PLEG): container finished" podID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerID="2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12" exitCode=0 Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.148453 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxblb" event={"ID":"a3eb4a58-0413-4b4b-af51-8510b4c21b70","Type":"ContainerDied","Data":"2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12"} Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.148482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxblb" event={"ID":"a3eb4a58-0413-4b4b-af51-8510b4c21b70","Type":"ContainerDied","Data":"6d854439446d73a2df2c32f280f89c81d7903e84844b3a87b7af5d1a161d2109"} Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.148498 4760 scope.go:117] "RemoveContainer" containerID="2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.149687 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxblb" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.183737 4760 scope.go:117] "RemoveContainer" containerID="55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.199260 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.199292 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8bwd\" (UniqueName: \"kubernetes.io/projected/a3eb4a58-0413-4b4b-af51-8510b4c21b70-kube-api-access-k8bwd\") on node \"crc\" DevicePath \"\"" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.199302 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3eb4a58-0413-4b4b-af51-8510b4c21b70-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.205504 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxblb"] Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.224382 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxblb"] Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.224914 4760 scope.go:117] "RemoveContainer" containerID="425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.254740 4760 scope.go:117] "RemoveContainer" containerID="2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12" Nov 25 09:19:07 crc kubenswrapper[4760]: E1125 09:19:07.255159 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12\": container with ID starting with 2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12 not found: ID does not exist" containerID="2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.255214 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12"} err="failed to get container status \"2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12\": rpc error: code = NotFound desc = could not find container \"2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12\": container with ID starting with 2cf0c44532c6eccee1bd5c607d3916e0b772fb246edb8a189597978c58f54f12 not found: ID does not exist" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.255257 4760 scope.go:117] "RemoveContainer" containerID="55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162" Nov 25 09:19:07 crc kubenswrapper[4760]: E1125 09:19:07.255671 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162\": container with ID starting with 55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162 not found: ID does not exist" containerID="55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.255705 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162"} err="failed to get container status \"55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162\": rpc error: code = NotFound desc = could not find container \"55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162\": container with ID 
starting with 55a6a5fa70e215fff06f03f74312ce11851518a22007d0ce502dd1675b647162 not found: ID does not exist" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.255731 4760 scope.go:117] "RemoveContainer" containerID="425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8" Nov 25 09:19:07 crc kubenswrapper[4760]: E1125 09:19:07.255959 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8\": container with ID starting with 425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8 not found: ID does not exist" containerID="425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8" Nov 25 09:19:07 crc kubenswrapper[4760]: I1125 09:19:07.255989 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8"} err="failed to get container status \"425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8\": rpc error: code = NotFound desc = could not find container \"425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8\": container with ID starting with 425b79b07b58da12c8323446fb50a744d481d1353a810e9e171e388854bba2d8 not found: ID does not exist" Nov 25 09:19:08 crc kubenswrapper[4760]: I1125 09:19:08.951521 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" path="/var/lib/kubelet/pods/a3eb4a58-0413-4b4b-af51-8510b4c21b70/volumes" Nov 25 09:19:15 crc kubenswrapper[4760]: I1125 09:19:15.938970 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:19:15 crc kubenswrapper[4760]: E1125 09:19:15.939896 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:19:29 crc kubenswrapper[4760]: I1125 09:19:29.939102 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:19:29 crc kubenswrapper[4760]: E1125 09:19:29.939934 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:19:44 crc kubenswrapper[4760]: I1125 09:19:44.938616 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:19:44 crc kubenswrapper[4760]: E1125 09:19:44.939407 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:19:57 crc kubenswrapper[4760]: I1125 09:19:57.938625 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:19:57 crc kubenswrapper[4760]: E1125 09:19:57.939492 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:20:08 crc kubenswrapper[4760]: I1125 09:20:08.938667 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:20:10 crc kubenswrapper[4760]: I1125 09:20:10.707082 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"73b41e64a6f9555e01224f0a956c057f05ac78b71d209c9d6f20eedffb258f91"} Nov 25 09:22:31 crc kubenswrapper[4760]: I1125 09:22:31.746529 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:22:31 crc kubenswrapper[4760]: I1125 09:22:31.747078 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:23:01 crc kubenswrapper[4760]: I1125 09:23:01.746125 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:23:01 crc kubenswrapper[4760]: I1125 09:23:01.746757 4760 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:23:31 crc kubenswrapper[4760]: I1125 09:23:31.746497 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:23:31 crc kubenswrapper[4760]: I1125 09:23:31.747032 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:23:31 crc kubenswrapper[4760]: I1125 09:23:31.747079 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 09:23:31 crc kubenswrapper[4760]: I1125 09:23:31.747842 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73b41e64a6f9555e01224f0a956c057f05ac78b71d209c9d6f20eedffb258f91"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:23:31 crc kubenswrapper[4760]: I1125 09:23:31.747896 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" 
containerID="cri-o://73b41e64a6f9555e01224f0a956c057f05ac78b71d209c9d6f20eedffb258f91" gracePeriod=600 Nov 25 09:23:32 crc kubenswrapper[4760]: I1125 09:23:32.504935 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="73b41e64a6f9555e01224f0a956c057f05ac78b71d209c9d6f20eedffb258f91" exitCode=0 Nov 25 09:23:32 crc kubenswrapper[4760]: I1125 09:23:32.505018 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"73b41e64a6f9555e01224f0a956c057f05ac78b71d209c9d6f20eedffb258f91"} Nov 25 09:23:32 crc kubenswrapper[4760]: I1125 09:23:32.505532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9"} Nov 25 09:23:32 crc kubenswrapper[4760]: I1125 09:23:32.505558 4760 scope.go:117] "RemoveContainer" containerID="5bc10df09eda9e8fe3fd17584495d53469c43beefc03c54db2d24afdeb394b94" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.226314 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ftmkw"] Nov 25 09:24:30 crc kubenswrapper[4760]: E1125 09:24:30.228733 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerName="registry-server" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.228829 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerName="registry-server" Nov 25 09:24:30 crc kubenswrapper[4760]: E1125 09:24:30.228924 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerName="extract-utilities" Nov 25 09:24:30 
crc kubenswrapper[4760]: I1125 09:24:30.228985 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerName="extract-utilities" Nov 25 09:24:30 crc kubenswrapper[4760]: E1125 09:24:30.229046 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerName="extract-content" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.229101 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerName="extract-content" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.229374 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3eb4a58-0413-4b4b-af51-8510b4c21b70" containerName="registry-server" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.230897 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.237671 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftmkw"] Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.368108 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpttj\" (UniqueName: \"kubernetes.io/projected/ef94b49c-1545-472d-a7b3-98cce33efb31-kube-api-access-gpttj\") pod \"redhat-operators-ftmkw\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.368637 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-catalog-content\") pod \"redhat-operators-ftmkw\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc 
kubenswrapper[4760]: I1125 09:24:30.368716 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-utilities\") pod \"redhat-operators-ftmkw\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.470072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpttj\" (UniqueName: \"kubernetes.io/projected/ef94b49c-1545-472d-a7b3-98cce33efb31-kube-api-access-gpttj\") pod \"redhat-operators-ftmkw\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.470195 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-catalog-content\") pod \"redhat-operators-ftmkw\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.470235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-utilities\") pod \"redhat-operators-ftmkw\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.470804 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-utilities\") pod \"redhat-operators-ftmkw\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.471022 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-catalog-content\") pod \"redhat-operators-ftmkw\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.495800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpttj\" (UniqueName: \"kubernetes.io/projected/ef94b49c-1545-472d-a7b3-98cce33efb31-kube-api-access-gpttj\") pod \"redhat-operators-ftmkw\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:30 crc kubenswrapper[4760]: I1125 09:24:30.550466 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:31 crc kubenswrapper[4760]: I1125 09:24:31.041385 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftmkw"] Nov 25 09:24:31 crc kubenswrapper[4760]: W1125 09:24:31.045713 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef94b49c_1545_472d_a7b3_98cce33efb31.slice/crio-b7082f3b56b36c5d44cda4539156ac3b0dfc833abb9a7c1b51ac611594142be2 WatchSource:0}: Error finding container b7082f3b56b36c5d44cda4539156ac3b0dfc833abb9a7c1b51ac611594142be2: Status 404 returned error can't find the container with id b7082f3b56b36c5d44cda4539156ac3b0dfc833abb9a7c1b51ac611594142be2 Nov 25 09:24:31 crc kubenswrapper[4760]: I1125 09:24:31.066117 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftmkw" event={"ID":"ef94b49c-1545-472d-a7b3-98cce33efb31","Type":"ContainerStarted","Data":"b7082f3b56b36c5d44cda4539156ac3b0dfc833abb9a7c1b51ac611594142be2"} Nov 25 09:24:32 crc kubenswrapper[4760]: I1125 09:24:32.076651 4760 
generic.go:334] "Generic (PLEG): container finished" podID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerID="ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937" exitCode=0 Nov 25 09:24:32 crc kubenswrapper[4760]: I1125 09:24:32.076918 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftmkw" event={"ID":"ef94b49c-1545-472d-a7b3-98cce33efb31","Type":"ContainerDied","Data":"ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937"} Nov 25 09:24:32 crc kubenswrapper[4760]: I1125 09:24:32.079815 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:24:34 crc kubenswrapper[4760]: I1125 09:24:34.093924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftmkw" event={"ID":"ef94b49c-1545-472d-a7b3-98cce33efb31","Type":"ContainerStarted","Data":"6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425"} Nov 25 09:24:34 crc kubenswrapper[4760]: E1125 09:24:34.355002 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef94b49c_1545_472d_a7b3_98cce33efb31.slice/crio-conmon-6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425.scope\": RecentStats: unable to find data in memory cache]" Nov 25 09:24:35 crc kubenswrapper[4760]: I1125 09:24:35.105829 4760 generic.go:334] "Generic (PLEG): container finished" podID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerID="6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425" exitCode=0 Nov 25 09:24:35 crc kubenswrapper[4760]: I1125 09:24:35.105973 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftmkw" event={"ID":"ef94b49c-1545-472d-a7b3-98cce33efb31","Type":"ContainerDied","Data":"6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425"} Nov 25 
09:24:36 crc kubenswrapper[4760]: I1125 09:24:36.120913 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftmkw" event={"ID":"ef94b49c-1545-472d-a7b3-98cce33efb31","Type":"ContainerStarted","Data":"36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788"} Nov 25 09:24:36 crc kubenswrapper[4760]: I1125 09:24:36.159296 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ftmkw" podStartSLOduration=2.383405955 podStartE2EDuration="6.159267713s" podCreationTimestamp="2025-11-25 09:24:30 +0000 UTC" firstStartedPulling="2025-11-25 09:24:32.079501191 +0000 UTC m=+4405.788531986" lastFinishedPulling="2025-11-25 09:24:35.855362939 +0000 UTC m=+4409.564393744" observedRunningTime="2025-11-25 09:24:36.150499154 +0000 UTC m=+4409.859529989" watchObservedRunningTime="2025-11-25 09:24:36.159267713 +0000 UTC m=+4409.868298518" Nov 25 09:24:40 crc kubenswrapper[4760]: I1125 09:24:40.551232 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:40 crc kubenswrapper[4760]: I1125 09:24:40.551777 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:41 crc kubenswrapper[4760]: I1125 09:24:41.620691 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ftmkw" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerName="registry-server" probeResult="failure" output=< Nov 25 09:24:41 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 09:24:41 crc kubenswrapper[4760]: > Nov 25 09:24:50 crc kubenswrapper[4760]: I1125 09:24:50.608696 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:50 crc kubenswrapper[4760]: I1125 09:24:50.664221 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:50 crc kubenswrapper[4760]: I1125 09:24:50.850288 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ftmkw"] Nov 25 09:24:52 crc kubenswrapper[4760]: I1125 09:24:52.249918 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ftmkw" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerName="registry-server" containerID="cri-o://36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788" gracePeriod=2 Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.005530 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.098833 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpttj\" (UniqueName: \"kubernetes.io/projected/ef94b49c-1545-472d-a7b3-98cce33efb31-kube-api-access-gpttj\") pod \"ef94b49c-1545-472d-a7b3-98cce33efb31\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.104770 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef94b49c-1545-472d-a7b3-98cce33efb31-kube-api-access-gpttj" (OuterVolumeSpecName: "kube-api-access-gpttj") pod "ef94b49c-1545-472d-a7b3-98cce33efb31" (UID: "ef94b49c-1545-472d-a7b3-98cce33efb31"). InnerVolumeSpecName "kube-api-access-gpttj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.201313 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-catalog-content\") pod \"ef94b49c-1545-472d-a7b3-98cce33efb31\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.201461 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-utilities\") pod \"ef94b49c-1545-472d-a7b3-98cce33efb31\" (UID: \"ef94b49c-1545-472d-a7b3-98cce33efb31\") " Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.202137 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpttj\" (UniqueName: \"kubernetes.io/projected/ef94b49c-1545-472d-a7b3-98cce33efb31-kube-api-access-gpttj\") on node \"crc\" DevicePath \"\"" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.203123 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-utilities" (OuterVolumeSpecName: "utilities") pod "ef94b49c-1545-472d-a7b3-98cce33efb31" (UID: "ef94b49c-1545-472d-a7b3-98cce33efb31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.260784 4760 generic.go:334] "Generic (PLEG): container finished" podID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerID="36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788" exitCode=0 Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.260835 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftmkw" event={"ID":"ef94b49c-1545-472d-a7b3-98cce33efb31","Type":"ContainerDied","Data":"36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788"} Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.260865 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftmkw" event={"ID":"ef94b49c-1545-472d-a7b3-98cce33efb31","Type":"ContainerDied","Data":"b7082f3b56b36c5d44cda4539156ac3b0dfc833abb9a7c1b51ac611594142be2"} Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.260884 4760 scope.go:117] "RemoveContainer" containerID="36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.261058 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ftmkw" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.281825 4760 scope.go:117] "RemoveContainer" containerID="6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.302314 4760 scope.go:117] "RemoveContainer" containerID="ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.303669 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.313197 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef94b49c-1545-472d-a7b3-98cce33efb31" (UID: "ef94b49c-1545-472d-a7b3-98cce33efb31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.344673 4760 scope.go:117] "RemoveContainer" containerID="36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788" Nov 25 09:24:53 crc kubenswrapper[4760]: E1125 09:24:53.345461 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788\": container with ID starting with 36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788 not found: ID does not exist" containerID="36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.345621 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788"} err="failed to get container status \"36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788\": rpc error: code = NotFound desc = could not find container \"36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788\": container with ID starting with 36b228fa642551679d3a4a32f94740c77bdc4e51b0231c05e0cdafb5d255b788 not found: ID does not exist" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.345753 4760 scope.go:117] "RemoveContainer" containerID="6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425" Nov 25 09:24:53 crc kubenswrapper[4760]: E1125 09:24:53.346240 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425\": container with ID starting with 6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425 not found: ID does not exist" containerID="6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.346357 
4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425"} err="failed to get container status \"6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425\": rpc error: code = NotFound desc = could not find container \"6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425\": container with ID starting with 6b56ccb0ff8d0ec4662f3a34833f7e66f261e99a69cd905c02158f677dc9b425 not found: ID does not exist" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.346409 4760 scope.go:117] "RemoveContainer" containerID="ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937" Nov 25 09:24:53 crc kubenswrapper[4760]: E1125 09:24:53.346783 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937\": container with ID starting with ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937 not found: ID does not exist" containerID="ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.346828 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937"} err="failed to get container status \"ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937\": rpc error: code = NotFound desc = could not find container \"ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937\": container with ID starting with ef0105c6e24a045679768c8dda53f09bd127ec95f88c4bd64a2335b7d6960937 not found: ID does not exist" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.405312 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ef94b49c-1545-472d-a7b3-98cce33efb31-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.595878 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ftmkw"] Nov 25 09:24:53 crc kubenswrapper[4760]: I1125 09:24:53.608453 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ftmkw"] Nov 25 09:24:54 crc kubenswrapper[4760]: I1125 09:24:54.952353 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" path="/var/lib/kubelet/pods/ef94b49c-1545-472d-a7b3-98cce33efb31/volumes" Nov 25 09:26:01 crc kubenswrapper[4760]: I1125 09:26:01.746593 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:26:01 crc kubenswrapper[4760]: I1125 09:26:01.747124 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:26:31 crc kubenswrapper[4760]: I1125 09:26:31.746633 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:26:31 crc kubenswrapper[4760]: I1125 09:26:31.747329 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" 
podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:27:01 crc kubenswrapper[4760]: I1125 09:27:01.747765 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:27:01 crc kubenswrapper[4760]: I1125 09:27:01.748200 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:27:01 crc kubenswrapper[4760]: I1125 09:27:01.748241 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 09:27:01 crc kubenswrapper[4760]: I1125 09:27:01.748955 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:27:01 crc kubenswrapper[4760]: I1125 09:27:01.748996 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" gracePeriod=600 Nov 25 
09:27:01 crc kubenswrapper[4760]: E1125 09:27:01.879693 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:27:02 crc kubenswrapper[4760]: I1125 09:27:02.394336 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" exitCode=0 Nov 25 09:27:02 crc kubenswrapper[4760]: I1125 09:27:02.394404 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9"} Nov 25 09:27:02 crc kubenswrapper[4760]: I1125 09:27:02.394698 4760 scope.go:117] "RemoveContainer" containerID="73b41e64a6f9555e01224f0a956c057f05ac78b71d209c9d6f20eedffb258f91" Nov 25 09:27:02 crc kubenswrapper[4760]: I1125 09:27:02.395599 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:27:02 crc kubenswrapper[4760]: E1125 09:27:02.395991 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.082749 4760 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x2vqv"] Nov 25 09:27:16 crc kubenswrapper[4760]: E1125 09:27:16.086060 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerName="extract-content" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.086168 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerName="extract-content" Nov 25 09:27:16 crc kubenswrapper[4760]: E1125 09:27:16.086286 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerName="extract-utilities" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.086369 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerName="extract-utilities" Nov 25 09:27:16 crc kubenswrapper[4760]: E1125 09:27:16.086533 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerName="registry-server" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.086609 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerName="registry-server" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.087057 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef94b49c-1545-472d-a7b3-98cce33efb31" containerName="registry-server" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.089493 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.125023 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2vqv"] Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.190351 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-utilities\") pod \"certified-operators-x2vqv\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.190469 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-catalog-content\") pod \"certified-operators-x2vqv\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.190550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcr8q\" (UniqueName: \"kubernetes.io/projected/f6e0d32d-404f-4102-b55d-39e181169a00-kube-api-access-gcr8q\") pod \"certified-operators-x2vqv\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.292586 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-utilities\") pod \"certified-operators-x2vqv\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.292992 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-catalog-content\") pod \"certified-operators-x2vqv\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.293089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcr8q\" (UniqueName: \"kubernetes.io/projected/f6e0d32d-404f-4102-b55d-39e181169a00-kube-api-access-gcr8q\") pod \"certified-operators-x2vqv\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.293178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-utilities\") pod \"certified-operators-x2vqv\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.293476 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-catalog-content\") pod \"certified-operators-x2vqv\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.324047 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcr8q\" (UniqueName: \"kubernetes.io/projected/f6e0d32d-404f-4102-b55d-39e181169a00-kube-api-access-gcr8q\") pod \"certified-operators-x2vqv\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.414459 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:16 crc kubenswrapper[4760]: I1125 09:27:16.949457 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:27:16 crc kubenswrapper[4760]: E1125 09:27:16.950119 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:27:17 crc kubenswrapper[4760]: I1125 09:27:17.010636 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2vqv"] Nov 25 09:27:17 crc kubenswrapper[4760]: I1125 09:27:17.582508 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6e0d32d-404f-4102-b55d-39e181169a00" containerID="ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81" exitCode=0 Nov 25 09:27:17 crc kubenswrapper[4760]: I1125 09:27:17.582615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2vqv" event={"ID":"f6e0d32d-404f-4102-b55d-39e181169a00","Type":"ContainerDied","Data":"ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81"} Nov 25 09:27:17 crc kubenswrapper[4760]: I1125 09:27:17.583017 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2vqv" event={"ID":"f6e0d32d-404f-4102-b55d-39e181169a00","Type":"ContainerStarted","Data":"1c07e8ee36f68611ddbe837fdff81c48557275d82b353716c81715ec40a07104"} Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.297933 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bhb6q"] Nov 
25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.309700 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhb6q"] Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.309839 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.445331 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-utilities\") pod \"community-operators-bhb6q\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.445691 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-catalog-content\") pod \"community-operators-bhb6q\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.445725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq878\" (UniqueName: \"kubernetes.io/projected/ea480579-5303-4162-ba18-434fc0a5b847-kube-api-access-mq878\") pod \"community-operators-bhb6q\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.547092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-utilities\") pod \"community-operators-bhb6q\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " pod="openshift-marketplace/community-operators-bhb6q" Nov 
25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.547229 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-catalog-content\") pod \"community-operators-bhb6q\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.547361 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq878\" (UniqueName: \"kubernetes.io/projected/ea480579-5303-4162-ba18-434fc0a5b847-kube-api-access-mq878\") pod \"community-operators-bhb6q\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.547690 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-utilities\") pod \"community-operators-bhb6q\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.547690 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-catalog-content\") pod \"community-operators-bhb6q\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.570961 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq878\" (UniqueName: \"kubernetes.io/projected/ea480579-5303-4162-ba18-434fc0a5b847-kube-api-access-mq878\") pod \"community-operators-bhb6q\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:18 crc 
kubenswrapper[4760]: I1125 09:27:18.596908 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2vqv" event={"ID":"f6e0d32d-404f-4102-b55d-39e181169a00","Type":"ContainerStarted","Data":"fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4"} Nov 25 09:27:18 crc kubenswrapper[4760]: I1125 09:27:18.671166 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:19 crc kubenswrapper[4760]: I1125 09:27:19.218816 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhb6q"] Nov 25 09:27:19 crc kubenswrapper[4760]: W1125 09:27:19.240581 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea480579_5303_4162_ba18_434fc0a5b847.slice/crio-873882949a0a9b85f9e031e97da566643da75b9defd301e27e21449716b92bf3 WatchSource:0}: Error finding container 873882949a0a9b85f9e031e97da566643da75b9defd301e27e21449716b92bf3: Status 404 returned error can't find the container with id 873882949a0a9b85f9e031e97da566643da75b9defd301e27e21449716b92bf3 Nov 25 09:27:19 crc kubenswrapper[4760]: I1125 09:27:19.606571 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhb6q" event={"ID":"ea480579-5303-4162-ba18-434fc0a5b847","Type":"ContainerStarted","Data":"9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa"} Nov 25 09:27:19 crc kubenswrapper[4760]: I1125 09:27:19.606618 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhb6q" event={"ID":"ea480579-5303-4162-ba18-434fc0a5b847","Type":"ContainerStarted","Data":"873882949a0a9b85f9e031e97da566643da75b9defd301e27e21449716b92bf3"} Nov 25 09:27:19 crc kubenswrapper[4760]: I1125 09:27:19.609067 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="f6e0d32d-404f-4102-b55d-39e181169a00" containerID="fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4" exitCode=0 Nov 25 09:27:19 crc kubenswrapper[4760]: I1125 09:27:19.609195 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2vqv" event={"ID":"f6e0d32d-404f-4102-b55d-39e181169a00","Type":"ContainerDied","Data":"fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4"} Nov 25 09:27:20 crc kubenswrapper[4760]: I1125 09:27:20.618917 4760 generic.go:334] "Generic (PLEG): container finished" podID="ea480579-5303-4162-ba18-434fc0a5b847" containerID="9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa" exitCode=0 Nov 25 09:27:20 crc kubenswrapper[4760]: I1125 09:27:20.619123 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhb6q" event={"ID":"ea480579-5303-4162-ba18-434fc0a5b847","Type":"ContainerDied","Data":"9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa"} Nov 25 09:27:20 crc kubenswrapper[4760]: I1125 09:27:20.624197 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2vqv" event={"ID":"f6e0d32d-404f-4102-b55d-39e181169a00","Type":"ContainerStarted","Data":"81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6"} Nov 25 09:27:20 crc kubenswrapper[4760]: I1125 09:27:20.670151 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x2vqv" podStartSLOduration=2.191497435 podStartE2EDuration="4.670134207s" podCreationTimestamp="2025-11-25 09:27:16 +0000 UTC" firstStartedPulling="2025-11-25 09:27:17.584728016 +0000 UTC m=+4571.293758811" lastFinishedPulling="2025-11-25 09:27:20.063364788 +0000 UTC m=+4573.772395583" observedRunningTime="2025-11-25 09:27:20.666142873 +0000 UTC m=+4574.375173668" watchObservedRunningTime="2025-11-25 09:27:20.670134207 +0000 UTC m=+4574.379165002" Nov 25 
09:27:21 crc kubenswrapper[4760]: I1125 09:27:21.637410 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhb6q" event={"ID":"ea480579-5303-4162-ba18-434fc0a5b847","Type":"ContainerStarted","Data":"1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab"} Nov 25 09:27:23 crc kubenswrapper[4760]: I1125 09:27:23.656096 4760 generic.go:334] "Generic (PLEG): container finished" podID="ea480579-5303-4162-ba18-434fc0a5b847" containerID="1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab" exitCode=0 Nov 25 09:27:23 crc kubenswrapper[4760]: I1125 09:27:23.656394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhb6q" event={"ID":"ea480579-5303-4162-ba18-434fc0a5b847","Type":"ContainerDied","Data":"1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab"} Nov 25 09:27:24 crc kubenswrapper[4760]: I1125 09:27:24.687095 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhb6q" event={"ID":"ea480579-5303-4162-ba18-434fc0a5b847","Type":"ContainerStarted","Data":"70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d"} Nov 25 09:27:24 crc kubenswrapper[4760]: I1125 09:27:24.715968 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bhb6q" podStartSLOduration=3.1984832340000002 podStartE2EDuration="6.715949243s" podCreationTimestamp="2025-11-25 09:27:18 +0000 UTC" firstStartedPulling="2025-11-25 09:27:20.620724621 +0000 UTC m=+4574.329755426" lastFinishedPulling="2025-11-25 09:27:24.13819064 +0000 UTC m=+4577.847221435" observedRunningTime="2025-11-25 09:27:24.709104838 +0000 UTC m=+4578.418135633" watchObservedRunningTime="2025-11-25 09:27:24.715949243 +0000 UTC m=+4578.424980038" Nov 25 09:27:26 crc kubenswrapper[4760]: I1125 09:27:26.415163 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:26 crc kubenswrapper[4760]: I1125 09:27:26.415712 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:27 crc kubenswrapper[4760]: I1125 09:27:27.489782 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-x2vqv" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" containerName="registry-server" probeResult="failure" output=< Nov 25 09:27:27 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 09:27:27 crc kubenswrapper[4760]: > Nov 25 09:27:28 crc kubenswrapper[4760]: I1125 09:27:28.672363 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:28 crc kubenswrapper[4760]: I1125 09:27:28.672708 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:29 crc kubenswrapper[4760]: I1125 09:27:29.723415 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bhb6q" podUID="ea480579-5303-4162-ba18-434fc0a5b847" containerName="registry-server" probeResult="failure" output=< Nov 25 09:27:29 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 09:27:29 crc kubenswrapper[4760]: > Nov 25 09:27:31 crc kubenswrapper[4760]: I1125 09:27:31.947230 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:27:31 crc kubenswrapper[4760]: E1125 09:27:31.948561 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:27:36 crc kubenswrapper[4760]: I1125 09:27:36.468635 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:36 crc kubenswrapper[4760]: I1125 09:27:36.519372 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:36 crc kubenswrapper[4760]: I1125 09:27:36.708869 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2vqv"] Nov 25 09:27:37 crc kubenswrapper[4760]: I1125 09:27:37.805244 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x2vqv" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" containerName="registry-server" containerID="cri-o://81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6" gracePeriod=2 Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.459307 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.658909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcr8q\" (UniqueName: \"kubernetes.io/projected/f6e0d32d-404f-4102-b55d-39e181169a00-kube-api-access-gcr8q\") pod \"f6e0d32d-404f-4102-b55d-39e181169a00\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.659102 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-catalog-content\") pod \"f6e0d32d-404f-4102-b55d-39e181169a00\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.659187 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-utilities\") pod \"f6e0d32d-404f-4102-b55d-39e181169a00\" (UID: \"f6e0d32d-404f-4102-b55d-39e181169a00\") " Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.660103 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-utilities" (OuterVolumeSpecName: "utilities") pod "f6e0d32d-404f-4102-b55d-39e181169a00" (UID: "f6e0d32d-404f-4102-b55d-39e181169a00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.664008 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e0d32d-404f-4102-b55d-39e181169a00-kube-api-access-gcr8q" (OuterVolumeSpecName: "kube-api-access-gcr8q") pod "f6e0d32d-404f-4102-b55d-39e181169a00" (UID: "f6e0d32d-404f-4102-b55d-39e181169a00"). InnerVolumeSpecName "kube-api-access-gcr8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.699808 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6e0d32d-404f-4102-b55d-39e181169a00" (UID: "f6e0d32d-404f-4102-b55d-39e181169a00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.730529 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.761340 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.761727 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcr8q\" (UniqueName: \"kubernetes.io/projected/f6e0d32d-404f-4102-b55d-39e181169a00-kube-api-access-gcr8q\") on node \"crc\" DevicePath \"\"" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.761841 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e0d32d-404f-4102-b55d-39e181169a00-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.782772 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.815825 4760 generic.go:334] "Generic (PLEG): container finished" podID="f6e0d32d-404f-4102-b55d-39e181169a00" containerID="81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6" exitCode=0 Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 
09:27:38.815879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2vqv" event={"ID":"f6e0d32d-404f-4102-b55d-39e181169a00","Type":"ContainerDied","Data":"81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6"} Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.815903 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2vqv" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.815941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2vqv" event={"ID":"f6e0d32d-404f-4102-b55d-39e181169a00","Type":"ContainerDied","Data":"1c07e8ee36f68611ddbe837fdff81c48557275d82b353716c81715ec40a07104"} Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.815962 4760 scope.go:117] "RemoveContainer" containerID="81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.843198 4760 scope.go:117] "RemoveContainer" containerID="fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.861426 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2vqv"] Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.876386 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x2vqv"] Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.879576 4760 scope.go:117] "RemoveContainer" containerID="ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.912736 4760 scope.go:117] "RemoveContainer" containerID="81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6" Nov 25 09:27:38 crc kubenswrapper[4760]: E1125 09:27:38.913273 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6\": container with ID starting with 81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6 not found: ID does not exist" containerID="81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.913317 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6"} err="failed to get container status \"81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6\": rpc error: code = NotFound desc = could not find container \"81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6\": container with ID starting with 81c21bb00b43b428c18154d77adc7c0cce09bf9fb556df3fb770c9012e7968b6 not found: ID does not exist" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.913341 4760 scope.go:117] "RemoveContainer" containerID="fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4" Nov 25 09:27:38 crc kubenswrapper[4760]: E1125 09:27:38.913743 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4\": container with ID starting with fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4 not found: ID does not exist" containerID="fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.913797 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4"} err="failed to get container status \"fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4\": rpc error: code = NotFound desc = could not find container 
\"fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4\": container with ID starting with fad851f62652e6c57a236421d57cfb66c98d8f8032168e879771dbdb771810e4 not found: ID does not exist" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.913834 4760 scope.go:117] "RemoveContainer" containerID="ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81" Nov 25 09:27:38 crc kubenswrapper[4760]: E1125 09:27:38.914283 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81\": container with ID starting with ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81 not found: ID does not exist" containerID="ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.914316 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81"} err="failed to get container status \"ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81\": rpc error: code = NotFound desc = could not find container \"ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81\": container with ID starting with ff07c9e84932c64379d1f472af9b273483b970276dea7a7fe54e7ae7a977eb81 not found: ID does not exist" Nov 25 09:27:38 crc kubenswrapper[4760]: I1125 09:27:38.951219 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" path="/var/lib/kubelet/pods/f6e0d32d-404f-4102-b55d-39e181169a00/volumes" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.103840 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhb6q"] Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.104404 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-bhb6q" podUID="ea480579-5303-4162-ba18-434fc0a5b847" containerName="registry-server" containerID="cri-o://70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d" gracePeriod=2 Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.820600 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.834467 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq878\" (UniqueName: \"kubernetes.io/projected/ea480579-5303-4162-ba18-434fc0a5b847-kube-api-access-mq878\") pod \"ea480579-5303-4162-ba18-434fc0a5b847\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.834515 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-utilities\") pod \"ea480579-5303-4162-ba18-434fc0a5b847\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.834585 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-catalog-content\") pod \"ea480579-5303-4162-ba18-434fc0a5b847\" (UID: \"ea480579-5303-4162-ba18-434fc0a5b847\") " Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.835539 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-utilities" (OuterVolumeSpecName: "utilities") pod "ea480579-5303-4162-ba18-434fc0a5b847" (UID: "ea480579-5303-4162-ba18-434fc0a5b847"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.848819 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea480579-5303-4162-ba18-434fc0a5b847-kube-api-access-mq878" (OuterVolumeSpecName: "kube-api-access-mq878") pod "ea480579-5303-4162-ba18-434fc0a5b847" (UID: "ea480579-5303-4162-ba18-434fc0a5b847"). InnerVolumeSpecName "kube-api-access-mq878". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.875171 4760 generic.go:334] "Generic (PLEG): container finished" podID="ea480579-5303-4162-ba18-434fc0a5b847" containerID="70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d" exitCode=0 Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.875226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhb6q" event={"ID":"ea480579-5303-4162-ba18-434fc0a5b847","Type":"ContainerDied","Data":"70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d"} Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.875274 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhb6q" event={"ID":"ea480579-5303-4162-ba18-434fc0a5b847","Type":"ContainerDied","Data":"873882949a0a9b85f9e031e97da566643da75b9defd301e27e21449716b92bf3"} Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.875296 4760 scope.go:117] "RemoveContainer" containerID="70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.875416 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhb6q" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.901655 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea480579-5303-4162-ba18-434fc0a5b847" (UID: "ea480579-5303-4162-ba18-434fc0a5b847"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.915874 4760 scope.go:117] "RemoveContainer" containerID="1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.938475 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq878\" (UniqueName: \"kubernetes.io/projected/ea480579-5303-4162-ba18-434fc0a5b847-kube-api-access-mq878\") on node \"crc\" DevicePath \"\"" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.938513 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.938523 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea480579-5303-4162-ba18-434fc0a5b847-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.958674 4760 scope.go:117] "RemoveContainer" containerID="9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.984750 4760 scope.go:117] "RemoveContainer" containerID="70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d" Nov 25 09:27:41 crc kubenswrapper[4760]: E1125 09:27:41.985236 4760 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d\": container with ID starting with 70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d not found: ID does not exist" containerID="70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.985287 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d"} err="failed to get container status \"70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d\": rpc error: code = NotFound desc = could not find container \"70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d\": container with ID starting with 70a1398448241650750502e703637ad2988083d67fc061a3c570018c0d224c7d not found: ID does not exist" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.985314 4760 scope.go:117] "RemoveContainer" containerID="1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab" Nov 25 09:27:41 crc kubenswrapper[4760]: E1125 09:27:41.985691 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab\": container with ID starting with 1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab not found: ID does not exist" containerID="1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.985746 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab"} err="failed to get container status \"1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab\": rpc error: code = NotFound desc = could not find container 
\"1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab\": container with ID starting with 1a721db33bb0fcb2757e80a588e8a229db47bc88b15ebb6fed25d8a0d4f071ab not found: ID does not exist" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.985782 4760 scope.go:117] "RemoveContainer" containerID="9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa" Nov 25 09:27:41 crc kubenswrapper[4760]: E1125 09:27:41.986195 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa\": container with ID starting with 9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa not found: ID does not exist" containerID="9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa" Nov 25 09:27:41 crc kubenswrapper[4760]: I1125 09:27:41.986232 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa"} err="failed to get container status \"9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa\": rpc error: code = NotFound desc = could not find container \"9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa\": container with ID starting with 9e69104c8766b3f3a6892914c9d14956ad99c8b2febbd448021e4e94e301edfa not found: ID does not exist" Nov 25 09:27:42 crc kubenswrapper[4760]: I1125 09:27:42.209816 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhb6q"] Nov 25 09:27:42 crc kubenswrapper[4760]: I1125 09:27:42.219964 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bhb6q"] Nov 25 09:27:42 crc kubenswrapper[4760]: I1125 09:27:42.950869 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea480579-5303-4162-ba18-434fc0a5b847" 
path="/var/lib/kubelet/pods/ea480579-5303-4162-ba18-434fc0a5b847/volumes" Nov 25 09:27:43 crc kubenswrapper[4760]: I1125 09:27:43.938978 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:27:43 crc kubenswrapper[4760]: E1125 09:27:43.939682 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:27:57 crc kubenswrapper[4760]: I1125 09:27:57.940157 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:27:57 crc kubenswrapper[4760]: E1125 09:27:57.941067 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:28:12 crc kubenswrapper[4760]: I1125 09:28:12.938918 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:28:12 crc kubenswrapper[4760]: E1125 09:28:12.939958 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:28:23 crc kubenswrapper[4760]: I1125 09:28:23.937951 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:28:23 crc kubenswrapper[4760]: E1125 09:28:23.938833 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:28:37 crc kubenswrapper[4760]: I1125 09:28:37.939150 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:28:37 crc kubenswrapper[4760]: E1125 09:28:37.939815 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:28:48 crc kubenswrapper[4760]: I1125 09:28:48.939353 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:28:48 crc kubenswrapper[4760]: E1125 09:28:48.940137 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:29:00 crc kubenswrapper[4760]: I1125 09:29:00.938754 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:29:00 crc kubenswrapper[4760]: E1125 09:29:00.939686 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:29:15 crc kubenswrapper[4760]: I1125 09:29:15.939189 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:29:15 crc kubenswrapper[4760]: E1125 09:29:15.940064 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:29:27 crc kubenswrapper[4760]: I1125 09:29:27.938288 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:29:27 crc kubenswrapper[4760]: E1125 09:29:27.938909 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.918230 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n9jtn"] Nov 25 09:29:29 crc kubenswrapper[4760]: E1125 09:29:29.919062 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea480579-5303-4162-ba18-434fc0a5b847" containerName="extract-utilities" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.919080 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea480579-5303-4162-ba18-434fc0a5b847" containerName="extract-utilities" Nov 25 09:29:29 crc kubenswrapper[4760]: E1125 09:29:29.919104 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" containerName="extract-utilities" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.919114 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" containerName="extract-utilities" Nov 25 09:29:29 crc kubenswrapper[4760]: E1125 09:29:29.919140 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" containerName="registry-server" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.919149 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" containerName="registry-server" Nov 25 09:29:29 crc kubenswrapper[4760]: E1125 09:29:29.919165 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea480579-5303-4162-ba18-434fc0a5b847" containerName="registry-server" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.919173 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea480579-5303-4162-ba18-434fc0a5b847" 
containerName="registry-server" Nov 25 09:29:29 crc kubenswrapper[4760]: E1125 09:29:29.919190 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" containerName="extract-content" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.919198 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" containerName="extract-content" Nov 25 09:29:29 crc kubenswrapper[4760]: E1125 09:29:29.919229 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea480579-5303-4162-ba18-434fc0a5b847" containerName="extract-content" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.919237 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea480579-5303-4162-ba18-434fc0a5b847" containerName="extract-content" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.919493 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e0d32d-404f-4102-b55d-39e181169a00" containerName="registry-server" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.919517 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea480579-5303-4162-ba18-434fc0a5b847" containerName="registry-server" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.921329 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:29 crc kubenswrapper[4760]: I1125 09:29:29.936639 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9jtn"] Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.105975 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-utilities\") pod \"redhat-marketplace-n9jtn\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.106097 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/ae74171e-0c17-4644-bfed-3a766547aa63-kube-api-access-hn96b\") pod \"redhat-marketplace-n9jtn\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.106534 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-catalog-content\") pod \"redhat-marketplace-n9jtn\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.208862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-utilities\") pod \"redhat-marketplace-n9jtn\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.208943 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/ae74171e-0c17-4644-bfed-3a766547aa63-kube-api-access-hn96b\") pod \"redhat-marketplace-n9jtn\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.209029 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-catalog-content\") pod \"redhat-marketplace-n9jtn\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.209472 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-utilities\") pod \"redhat-marketplace-n9jtn\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.209507 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-catalog-content\") pod \"redhat-marketplace-n9jtn\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.236337 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/ae74171e-0c17-4644-bfed-3a766547aa63-kube-api-access-hn96b\") pod \"redhat-marketplace-n9jtn\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.262078 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.754731 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9jtn"] Nov 25 09:29:30 crc kubenswrapper[4760]: I1125 09:29:30.847329 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9jtn" event={"ID":"ae74171e-0c17-4644-bfed-3a766547aa63","Type":"ContainerStarted","Data":"d44e2dc7d1d66f809745ee489d59e8176ea7fd605505aa953bc1eaa7315f74cb"} Nov 25 09:29:31 crc kubenswrapper[4760]: I1125 09:29:31.858671 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae74171e-0c17-4644-bfed-3a766547aa63" containerID="224ce11a7e8d68089605db7800e64db32c6ebeab0c2705f0b301d11fba1e8e63" exitCode=0 Nov 25 09:29:31 crc kubenswrapper[4760]: I1125 09:29:31.858899 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9jtn" event={"ID":"ae74171e-0c17-4644-bfed-3a766547aa63","Type":"ContainerDied","Data":"224ce11a7e8d68089605db7800e64db32c6ebeab0c2705f0b301d11fba1e8e63"} Nov 25 09:29:34 crc kubenswrapper[4760]: I1125 09:29:34.887509 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae74171e-0c17-4644-bfed-3a766547aa63" containerID="32150edd2114fd9dd3c72ee540050ee2d93c52090a49f091799cbcb78e4d4b28" exitCode=0 Nov 25 09:29:34 crc kubenswrapper[4760]: I1125 09:29:34.887639 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9jtn" event={"ID":"ae74171e-0c17-4644-bfed-3a766547aa63","Type":"ContainerDied","Data":"32150edd2114fd9dd3c72ee540050ee2d93c52090a49f091799cbcb78e4d4b28"} Nov 25 09:29:34 crc kubenswrapper[4760]: I1125 09:29:34.889802 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:29:36 crc kubenswrapper[4760]: I1125 09:29:36.908992 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-n9jtn" event={"ID":"ae74171e-0c17-4644-bfed-3a766547aa63","Type":"ContainerStarted","Data":"f7b3452f69e48a53aed77031d1ca80d95102bc8237d4f7165e29f5b6c6e37d53"} Nov 25 09:29:36 crc kubenswrapper[4760]: I1125 09:29:36.931280 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n9jtn" podStartSLOduration=4.034058109 podStartE2EDuration="7.931235119s" podCreationTimestamp="2025-11-25 09:29:29 +0000 UTC" firstStartedPulling="2025-11-25 09:29:31.862869087 +0000 UTC m=+4705.571899882" lastFinishedPulling="2025-11-25 09:29:35.760046097 +0000 UTC m=+4709.469076892" observedRunningTime="2025-11-25 09:29:36.928686887 +0000 UTC m=+4710.637717692" watchObservedRunningTime="2025-11-25 09:29:36.931235119 +0000 UTC m=+4710.640265914" Nov 25 09:29:40 crc kubenswrapper[4760]: I1125 09:29:40.263000 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:40 crc kubenswrapper[4760]: I1125 09:29:40.263358 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:40 crc kubenswrapper[4760]: I1125 09:29:40.316681 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:41 crc kubenswrapper[4760]: I1125 09:29:41.010932 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:41 crc kubenswrapper[4760]: I1125 09:29:41.066836 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9jtn"] Nov 25 09:29:42 crc kubenswrapper[4760]: I1125 09:29:42.938185 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:29:42 crc kubenswrapper[4760]: 
E1125 09:29:42.940074 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:29:42 crc kubenswrapper[4760]: I1125 09:29:42.962573 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n9jtn" podUID="ae74171e-0c17-4644-bfed-3a766547aa63" containerName="registry-server" containerID="cri-o://f7b3452f69e48a53aed77031d1ca80d95102bc8237d4f7165e29f5b6c6e37d53" gracePeriod=2 Nov 25 09:29:43 crc kubenswrapper[4760]: I1125 09:29:43.975404 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae74171e-0c17-4644-bfed-3a766547aa63" containerID="f7b3452f69e48a53aed77031d1ca80d95102bc8237d4f7165e29f5b6c6e37d53" exitCode=0 Nov 25 09:29:43 crc kubenswrapper[4760]: I1125 09:29:43.975726 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9jtn" event={"ID":"ae74171e-0c17-4644-bfed-3a766547aa63","Type":"ContainerDied","Data":"f7b3452f69e48a53aed77031d1ca80d95102bc8237d4f7165e29f5b6c6e37d53"} Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.367685 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.485172 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-utilities\") pod \"ae74171e-0c17-4644-bfed-3a766547aa63\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.485220 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-catalog-content\") pod \"ae74171e-0c17-4644-bfed-3a766547aa63\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.485331 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/ae74171e-0c17-4644-bfed-3a766547aa63-kube-api-access-hn96b\") pod \"ae74171e-0c17-4644-bfed-3a766547aa63\" (UID: \"ae74171e-0c17-4644-bfed-3a766547aa63\") " Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.486234 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-utilities" (OuterVolumeSpecName: "utilities") pod "ae74171e-0c17-4644-bfed-3a766547aa63" (UID: "ae74171e-0c17-4644-bfed-3a766547aa63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.491117 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae74171e-0c17-4644-bfed-3a766547aa63-kube-api-access-hn96b" (OuterVolumeSpecName: "kube-api-access-hn96b") pod "ae74171e-0c17-4644-bfed-3a766547aa63" (UID: "ae74171e-0c17-4644-bfed-3a766547aa63"). InnerVolumeSpecName "kube-api-access-hn96b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.506954 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae74171e-0c17-4644-bfed-3a766547aa63" (UID: "ae74171e-0c17-4644-bfed-3a766547aa63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.587421 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.587462 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae74171e-0c17-4644-bfed-3a766547aa63-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.587475 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn96b\" (UniqueName: \"kubernetes.io/projected/ae74171e-0c17-4644-bfed-3a766547aa63-kube-api-access-hn96b\") on node \"crc\" DevicePath \"\"" Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.987439 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9jtn" event={"ID":"ae74171e-0c17-4644-bfed-3a766547aa63","Type":"ContainerDied","Data":"d44e2dc7d1d66f809745ee489d59e8176ea7fd605505aa953bc1eaa7315f74cb"} Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.987528 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9jtn" Nov 25 09:29:44 crc kubenswrapper[4760]: I1125 09:29:44.987816 4760 scope.go:117] "RemoveContainer" containerID="f7b3452f69e48a53aed77031d1ca80d95102bc8237d4f7165e29f5b6c6e37d53" Nov 25 09:29:45 crc kubenswrapper[4760]: I1125 09:29:45.012755 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9jtn"] Nov 25 09:29:45 crc kubenswrapper[4760]: I1125 09:29:45.020199 4760 scope.go:117] "RemoveContainer" containerID="32150edd2114fd9dd3c72ee540050ee2d93c52090a49f091799cbcb78e4d4b28" Nov 25 09:29:45 crc kubenswrapper[4760]: I1125 09:29:45.025338 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9jtn"] Nov 25 09:29:45 crc kubenswrapper[4760]: I1125 09:29:45.269184 4760 scope.go:117] "RemoveContainer" containerID="224ce11a7e8d68089605db7800e64db32c6ebeab0c2705f0b301d11fba1e8e63" Nov 25 09:29:46 crc kubenswrapper[4760]: I1125 09:29:46.951425 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae74171e-0c17-4644-bfed-3a766547aa63" path="/var/lib/kubelet/pods/ae74171e-0c17-4644-bfed-3a766547aa63/volumes" Nov 25 09:29:56 crc kubenswrapper[4760]: I1125 09:29:56.945792 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:29:56 crc kubenswrapper[4760]: E1125 09:29:56.946735 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.154628 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2"] Nov 25 09:30:00 crc kubenswrapper[4760]: E1125 09:30:00.155685 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae74171e-0c17-4644-bfed-3a766547aa63" containerName="registry-server" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.155702 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae74171e-0c17-4644-bfed-3a766547aa63" containerName="registry-server" Nov 25 09:30:00 crc kubenswrapper[4760]: E1125 09:30:00.155734 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae74171e-0c17-4644-bfed-3a766547aa63" containerName="extract-content" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.155740 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae74171e-0c17-4644-bfed-3a766547aa63" containerName="extract-content" Nov 25 09:30:00 crc kubenswrapper[4760]: E1125 09:30:00.155754 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae74171e-0c17-4644-bfed-3a766547aa63" containerName="extract-utilities" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.155761 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae74171e-0c17-4644-bfed-3a766547aa63" containerName="extract-utilities" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.155994 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae74171e-0c17-4644-bfed-3a766547aa63" containerName="registry-server" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.157094 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.160778 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.165002 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.165758 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2"] Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.284343 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-config-volume\") pod \"collect-profiles-29401050-s7vw2\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.284399 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrss\" (UniqueName: \"kubernetes.io/projected/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-kube-api-access-9jrss\") pod \"collect-profiles-29401050-s7vw2\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.285708 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-secret-volume\") pod \"collect-profiles-29401050-s7vw2\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.387687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-config-volume\") pod \"collect-profiles-29401050-s7vw2\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.387745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jrss\" (UniqueName: \"kubernetes.io/projected/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-kube-api-access-9jrss\") pod \"collect-profiles-29401050-s7vw2\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.387779 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-secret-volume\") pod \"collect-profiles-29401050-s7vw2\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.388656 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-config-volume\") pod \"collect-profiles-29401050-s7vw2\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.396276 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-secret-volume\") pod \"collect-profiles-29401050-s7vw2\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.416765 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jrss\" (UniqueName: \"kubernetes.io/projected/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-kube-api-access-9jrss\") pod \"collect-profiles-29401050-s7vw2\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.482820 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:00 crc kubenswrapper[4760]: I1125 09:30:00.956771 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2"] Nov 25 09:30:01 crc kubenswrapper[4760]: I1125 09:30:01.144902 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" event={"ID":"47d1e8dd-a7c0-447d-82c2-c3382cd582e9","Type":"ContainerStarted","Data":"8b52dedb7ceb50f33366c40b4a825bd32ece4e5e78f41132ea64657e3f8e9041"} Nov 25 09:30:01 crc kubenswrapper[4760]: I1125 09:30:01.144954 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" event={"ID":"47d1e8dd-a7c0-447d-82c2-c3382cd582e9","Type":"ContainerStarted","Data":"bc990ba71eff80b67a6f2f68926670f2a0eea9257e5a2adad984fdb873a3290f"} Nov 25 09:30:01 crc kubenswrapper[4760]: I1125 09:30:01.169321 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" 
podStartSLOduration=1.169301937 podStartE2EDuration="1.169301937s" podCreationTimestamp="2025-11-25 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:30:01.16199322 +0000 UTC m=+4734.871024035" watchObservedRunningTime="2025-11-25 09:30:01.169301937 +0000 UTC m=+4734.878332732" Nov 25 09:30:02 crc kubenswrapper[4760]: I1125 09:30:02.156019 4760 generic.go:334] "Generic (PLEG): container finished" podID="47d1e8dd-a7c0-447d-82c2-c3382cd582e9" containerID="8b52dedb7ceb50f33366c40b4a825bd32ece4e5e78f41132ea64657e3f8e9041" exitCode=0 Nov 25 09:30:02 crc kubenswrapper[4760]: I1125 09:30:02.156274 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" event={"ID":"47d1e8dd-a7c0-447d-82c2-c3382cd582e9","Type":"ContainerDied","Data":"8b52dedb7ceb50f33366c40b4a825bd32ece4e5e78f41132ea64657e3f8e9041"} Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.579514 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.765915 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jrss\" (UniqueName: \"kubernetes.io/projected/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-kube-api-access-9jrss\") pod \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.766087 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-secret-volume\") pod \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.766140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-config-volume\") pod \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\" (UID: \"47d1e8dd-a7c0-447d-82c2-c3382cd582e9\") " Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.766989 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-config-volume" (OuterVolumeSpecName: "config-volume") pod "47d1e8dd-a7c0-447d-82c2-c3382cd582e9" (UID: "47d1e8dd-a7c0-447d-82c2-c3382cd582e9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.771965 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "47d1e8dd-a7c0-447d-82c2-c3382cd582e9" (UID: "47d1e8dd-a7c0-447d-82c2-c3382cd582e9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.772018 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-kube-api-access-9jrss" (OuterVolumeSpecName: "kube-api-access-9jrss") pod "47d1e8dd-a7c0-447d-82c2-c3382cd582e9" (UID: "47d1e8dd-a7c0-447d-82c2-c3382cd582e9"). InnerVolumeSpecName "kube-api-access-9jrss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.869635 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jrss\" (UniqueName: \"kubernetes.io/projected/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-kube-api-access-9jrss\") on node \"crc\" DevicePath \"\"" Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.869676 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:30:03 crc kubenswrapper[4760]: I1125 09:30:03.869689 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d1e8dd-a7c0-447d-82c2-c3382cd582e9-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:30:04 crc kubenswrapper[4760]: I1125 09:30:04.175180 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" event={"ID":"47d1e8dd-a7c0-447d-82c2-c3382cd582e9","Type":"ContainerDied","Data":"bc990ba71eff80b67a6f2f68926670f2a0eea9257e5a2adad984fdb873a3290f"} Nov 25 09:30:04 crc kubenswrapper[4760]: I1125 09:30:04.175545 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc990ba71eff80b67a6f2f68926670f2a0eea9257e5a2adad984fdb873a3290f" Nov 25 09:30:04 crc kubenswrapper[4760]: I1125 09:30:04.175317 4760 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2" Nov 25 09:30:04 crc kubenswrapper[4760]: I1125 09:30:04.243473 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"] Nov 25 09:30:04 crc kubenswrapper[4760]: I1125 09:30:04.255937 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-msvbw"] Nov 25 09:30:04 crc kubenswrapper[4760]: I1125 09:30:04.949014 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8783b12f-890c-429f-9193-2c8e5d6ce684" path="/var/lib/kubelet/pods/8783b12f-890c-429f-9193-2c8e5d6ce684/volumes" Nov 25 09:30:09 crc kubenswrapper[4760]: I1125 09:30:09.938888 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:30:09 crc kubenswrapper[4760]: E1125 09:30:09.939750 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:30:24 crc kubenswrapper[4760]: I1125 09:30:24.939172 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:30:24 crc kubenswrapper[4760]: E1125 09:30:24.941391 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:30:34 crc kubenswrapper[4760]: I1125 09:30:34.923875 4760 scope.go:117] "RemoveContainer" containerID="f3f34d4a4469b7c3809f78af104d70eeacb04996a30f0b5056ba2156768c2936" Nov 25 09:30:35 crc kubenswrapper[4760]: I1125 09:30:35.939243 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:30:35 crc kubenswrapper[4760]: E1125 09:30:35.939525 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:30:48 crc kubenswrapper[4760]: I1125 09:30:48.939070 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:30:48 crc kubenswrapper[4760]: E1125 09:30:48.939927 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:30:59 crc kubenswrapper[4760]: I1125 09:30:59.938490 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:30:59 crc kubenswrapper[4760]: E1125 09:30:59.939147 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:31:13 crc kubenswrapper[4760]: I1125 09:31:13.938213 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:31:13 crc kubenswrapper[4760]: E1125 09:31:13.938969 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:31:24 crc kubenswrapper[4760]: I1125 09:31:24.938146 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:31:24 crc kubenswrapper[4760]: E1125 09:31:24.938926 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:31:39 crc kubenswrapper[4760]: I1125 09:31:39.938673 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:31:39 crc kubenswrapper[4760]: E1125 09:31:39.939490 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:31:53 crc kubenswrapper[4760]: I1125 09:31:53.938768 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:31:53 crc kubenswrapper[4760]: E1125 09:31:53.939673 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:32:04 crc kubenswrapper[4760]: I1125 09:32:04.948078 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:32:06 crc kubenswrapper[4760]: I1125 09:32:06.242636 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"16ae97d53e104ac7f398473717e19e54ef55eb617f217ae5f5a8a4fb70e12945"} Nov 25 09:34:31 crc kubenswrapper[4760]: I1125 09:34:31.745955 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:34:31 crc kubenswrapper[4760]: I1125 09:34:31.746584 4760 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:35:01 crc kubenswrapper[4760]: I1125 09:35:01.746437 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:35:01 crc kubenswrapper[4760]: I1125 09:35:01.746966 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:35:31 crc kubenswrapper[4760]: I1125 09:35:31.746077 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:35:31 crc kubenswrapper[4760]: I1125 09:35:31.746703 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:35:31 crc kubenswrapper[4760]: I1125 09:35:31.746758 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 09:35:31 crc 
kubenswrapper[4760]: I1125 09:35:31.747364 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16ae97d53e104ac7f398473717e19e54ef55eb617f217ae5f5a8a4fb70e12945"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:35:31 crc kubenswrapper[4760]: I1125 09:35:31.747424 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://16ae97d53e104ac7f398473717e19e54ef55eb617f217ae5f5a8a4fb70e12945" gracePeriod=600 Nov 25 09:35:32 crc kubenswrapper[4760]: I1125 09:35:32.141311 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="16ae97d53e104ac7f398473717e19e54ef55eb617f217ae5f5a8a4fb70e12945" exitCode=0 Nov 25 09:35:32 crc kubenswrapper[4760]: I1125 09:35:32.141366 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"16ae97d53e104ac7f398473717e19e54ef55eb617f217ae5f5a8a4fb70e12945"} Nov 25 09:35:32 crc kubenswrapper[4760]: I1125 09:35:32.141628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77"} Nov 25 09:35:32 crc kubenswrapper[4760]: I1125 09:35:32.141696 4760 scope.go:117] "RemoveContainer" containerID="e8b953e0fd574133b1c4cbdef019e0ac25895a7f437a4923370d32d1fd1c80b9" Nov 25 09:38:01 crc kubenswrapper[4760]: I1125 09:38:01.746696 4760 
patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:38:01 crc kubenswrapper[4760]: I1125 09:38:01.747277 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:38:20 crc kubenswrapper[4760]: I1125 09:38:20.935491 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8gvsd"] Nov 25 09:38:20 crc kubenswrapper[4760]: E1125 09:38:20.936551 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d1e8dd-a7c0-447d-82c2-c3382cd582e9" containerName="collect-profiles" Nov 25 09:38:20 crc kubenswrapper[4760]: I1125 09:38:20.936568 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d1e8dd-a7c0-447d-82c2-c3382cd582e9" containerName="collect-profiles" Nov 25 09:38:20 crc kubenswrapper[4760]: I1125 09:38:20.936823 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d1e8dd-a7c0-447d-82c2-c3382cd582e9" containerName="collect-profiles" Nov 25 09:38:20 crc kubenswrapper[4760]: I1125 09:38:20.938927 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:20 crc kubenswrapper[4760]: I1125 09:38:20.959670 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-catalog-content\") pod \"community-operators-8gvsd\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:20 crc kubenswrapper[4760]: I1125 09:38:20.959882 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-utilities\") pod \"community-operators-8gvsd\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:20 crc kubenswrapper[4760]: I1125 09:38:20.959944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x22xq\" (UniqueName: \"kubernetes.io/projected/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-kube-api-access-x22xq\") pod \"community-operators-8gvsd\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:20 crc kubenswrapper[4760]: I1125 09:38:20.965770 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gvsd"] Nov 25 09:38:21 crc kubenswrapper[4760]: I1125 09:38:21.061737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-catalog-content\") pod \"community-operators-8gvsd\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:21 crc kubenswrapper[4760]: I1125 09:38:21.061846 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-utilities\") pod \"community-operators-8gvsd\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:21 crc kubenswrapper[4760]: I1125 09:38:21.061874 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22xq\" (UniqueName: \"kubernetes.io/projected/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-kube-api-access-x22xq\") pod \"community-operators-8gvsd\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:21 crc kubenswrapper[4760]: I1125 09:38:21.062555 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-utilities\") pod \"community-operators-8gvsd\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:21 crc kubenswrapper[4760]: I1125 09:38:21.062612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-catalog-content\") pod \"community-operators-8gvsd\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:21 crc kubenswrapper[4760]: I1125 09:38:21.083473 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x22xq\" (UniqueName: \"kubernetes.io/projected/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-kube-api-access-x22xq\") pod \"community-operators-8gvsd\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:21 crc kubenswrapper[4760]: I1125 09:38:21.271706 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:21 crc kubenswrapper[4760]: I1125 09:38:21.892894 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gvsd"] Nov 25 09:38:21 crc kubenswrapper[4760]: I1125 09:38:21.909607 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gvsd" event={"ID":"b1e2eee2-a6c8-4991-86f3-03d10c8902e1","Type":"ContainerStarted","Data":"55862793c7fd080e9cf5b4d87d5b02ba49c9fa2ffef04c3f7426e333c6c711d5"} Nov 25 09:38:22 crc kubenswrapper[4760]: I1125 09:38:22.920102 4760 generic.go:334] "Generic (PLEG): container finished" podID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerID="e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a" exitCode=0 Nov 25 09:38:22 crc kubenswrapper[4760]: I1125 09:38:22.920147 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gvsd" event={"ID":"b1e2eee2-a6c8-4991-86f3-03d10c8902e1","Type":"ContainerDied","Data":"e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a"} Nov 25 09:38:22 crc kubenswrapper[4760]: I1125 09:38:22.922810 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:38:23 crc kubenswrapper[4760]: I1125 09:38:23.934080 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gvsd" event={"ID":"b1e2eee2-a6c8-4991-86f3-03d10c8902e1","Type":"ContainerStarted","Data":"bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c"} Nov 25 09:38:27 crc kubenswrapper[4760]: I1125 09:38:27.967303 4760 generic.go:334] "Generic (PLEG): container finished" podID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerID="bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c" exitCode=0 Nov 25 09:38:27 crc kubenswrapper[4760]: I1125 09:38:27.967410 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-8gvsd" event={"ID":"b1e2eee2-a6c8-4991-86f3-03d10c8902e1","Type":"ContainerDied","Data":"bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c"} Nov 25 09:38:28 crc kubenswrapper[4760]: I1125 09:38:28.985643 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gvsd" event={"ID":"b1e2eee2-a6c8-4991-86f3-03d10c8902e1","Type":"ContainerStarted","Data":"0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1"} Nov 25 09:38:29 crc kubenswrapper[4760]: I1125 09:38:29.020542 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8gvsd" podStartSLOduration=3.487222165 podStartE2EDuration="9.020520101s" podCreationTimestamp="2025-11-25 09:38:20 +0000 UTC" firstStartedPulling="2025-11-25 09:38:22.92242314 +0000 UTC m=+5236.631453945" lastFinishedPulling="2025-11-25 09:38:28.455721086 +0000 UTC m=+5242.164751881" observedRunningTime="2025-11-25 09:38:29.012821512 +0000 UTC m=+5242.721852317" watchObservedRunningTime="2025-11-25 09:38:29.020520101 +0000 UTC m=+5242.729550896" Nov 25 09:38:31 crc kubenswrapper[4760]: I1125 09:38:31.273019 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:31 crc kubenswrapper[4760]: I1125 09:38:31.273457 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:31 crc kubenswrapper[4760]: I1125 09:38:31.318201 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:31 crc kubenswrapper[4760]: I1125 09:38:31.745981 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:38:31 crc kubenswrapper[4760]: I1125 09:38:31.746044 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:38:33 crc kubenswrapper[4760]: I1125 09:38:33.087343 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:33 crc kubenswrapper[4760]: I1125 09:38:33.142034 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gvsd"] Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.038153 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8gvsd" podUID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerName="registry-server" containerID="cri-o://0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1" gracePeriod=2 Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.607545 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.660693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x22xq\" (UniqueName: \"kubernetes.io/projected/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-kube-api-access-x22xq\") pod \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.660781 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-utilities\") pod \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.660950 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-catalog-content\") pod \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\" (UID: \"b1e2eee2-a6c8-4991-86f3-03d10c8902e1\") " Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.661740 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-utilities" (OuterVolumeSpecName: "utilities") pod "b1e2eee2-a6c8-4991-86f3-03d10c8902e1" (UID: "b1e2eee2-a6c8-4991-86f3-03d10c8902e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.669382 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-kube-api-access-x22xq" (OuterVolumeSpecName: "kube-api-access-x22xq") pod "b1e2eee2-a6c8-4991-86f3-03d10c8902e1" (UID: "b1e2eee2-a6c8-4991-86f3-03d10c8902e1"). InnerVolumeSpecName "kube-api-access-x22xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.712673 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1e2eee2-a6c8-4991-86f3-03d10c8902e1" (UID: "b1e2eee2-a6c8-4991-86f3-03d10c8902e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.763452 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.763706 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x22xq\" (UniqueName: \"kubernetes.io/projected/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-kube-api-access-x22xq\") on node \"crc\" DevicePath \"\"" Nov 25 09:38:35 crc kubenswrapper[4760]: I1125 09:38:35.763800 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e2eee2-a6c8-4991-86f3-03d10c8902e1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.052387 4760 generic.go:334] "Generic (PLEG): container finished" podID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerID="0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1" exitCode=0 Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.052426 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gvsd" event={"ID":"b1e2eee2-a6c8-4991-86f3-03d10c8902e1","Type":"ContainerDied","Data":"0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1"} Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.052450 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-8gvsd" event={"ID":"b1e2eee2-a6c8-4991-86f3-03d10c8902e1","Type":"ContainerDied","Data":"55862793c7fd080e9cf5b4d87d5b02ba49c9fa2ffef04c3f7426e333c6c711d5"} Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.052467 4760 scope.go:117] "RemoveContainer" containerID="0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.052463 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gvsd" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.073323 4760 scope.go:117] "RemoveContainer" containerID="bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.100055 4760 scope.go:117] "RemoveContainer" containerID="e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.116895 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gvsd"] Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.129818 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8gvsd"] Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.173671 4760 scope.go:117] "RemoveContainer" containerID="0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1" Nov 25 09:38:36 crc kubenswrapper[4760]: E1125 09:38:36.174434 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1\": container with ID starting with 0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1 not found: ID does not exist" containerID="0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 
09:38:36.174486 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1"} err="failed to get container status \"0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1\": rpc error: code = NotFound desc = could not find container \"0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1\": container with ID starting with 0a8df48a387f88af989909e03307b0f667ae57610457b35eb1ef715056c105d1 not found: ID does not exist" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.174522 4760 scope.go:117] "RemoveContainer" containerID="bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c" Nov 25 09:38:36 crc kubenswrapper[4760]: E1125 09:38:36.175122 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c\": container with ID starting with bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c not found: ID does not exist" containerID="bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.175167 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c"} err="failed to get container status \"bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c\": rpc error: code = NotFound desc = could not find container \"bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c\": container with ID starting with bace08ff2fd395c259210739e3729d168bdabaf3297db997e625d2277a470b6c not found: ID does not exist" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.175194 4760 scope.go:117] "RemoveContainer" containerID="e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a" Nov 25 09:38:36 crc 
kubenswrapper[4760]: E1125 09:38:36.175489 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a\": container with ID starting with e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a not found: ID does not exist" containerID="e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.175523 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a"} err="failed to get container status \"e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a\": rpc error: code = NotFound desc = could not find container \"e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a\": container with ID starting with e517a3e7b8d04b5049bffddfca2a5a9877942c9a132ee8f991a2a67fdc996b4a not found: ID does not exist" Nov 25 09:38:36 crc kubenswrapper[4760]: I1125 09:38:36.948865 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" path="/var/lib/kubelet/pods/b1e2eee2-a6c8-4991-86f3-03d10c8902e1/volumes" Nov 25 09:39:01 crc kubenswrapper[4760]: I1125 09:39:01.746546 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:39:01 crc kubenswrapper[4760]: I1125 09:39:01.747281 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 25 09:39:01 crc kubenswrapper[4760]: I1125 09:39:01.747386 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 09:39:01 crc kubenswrapper[4760]: I1125 09:39:01.748444 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:39:01 crc kubenswrapper[4760]: I1125 09:39:01.748542 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" gracePeriod=600 Nov 25 09:39:01 crc kubenswrapper[4760]: E1125 09:39:01.888612 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:39:02 crc kubenswrapper[4760]: I1125 09:39:02.287487 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" exitCode=0 Nov 25 09:39:02 crc kubenswrapper[4760]: I1125 09:39:02.287543 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77"} Nov 25 09:39:02 crc kubenswrapper[4760]: I1125 09:39:02.287583 4760 scope.go:117] "RemoveContainer" containerID="16ae97d53e104ac7f398473717e19e54ef55eb617f217ae5f5a8a4fb70e12945" Nov 25 09:39:02 crc kubenswrapper[4760]: I1125 09:39:02.288429 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:39:02 crc kubenswrapper[4760]: E1125 09:39:02.288761 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:39:13 crc kubenswrapper[4760]: I1125 09:39:13.938613 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:39:13 crc kubenswrapper[4760]: E1125 09:39:13.939448 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:39:25 crc kubenswrapper[4760]: I1125 09:39:25.938981 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:39:25 crc kubenswrapper[4760]: E1125 09:39:25.939733 4760 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:39:36 crc kubenswrapper[4760]: I1125 09:39:36.950790 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:39:36 crc kubenswrapper[4760]: E1125 09:39:36.951631 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:39:47 crc kubenswrapper[4760]: I1125 09:39:47.938839 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:39:47 crc kubenswrapper[4760]: E1125 09:39:47.940983 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.110961 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-85k6c"] Nov 25 09:39:52 crc kubenswrapper[4760]: E1125 09:39:52.112003 
4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerName="registry-server" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.112020 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerName="registry-server" Nov 25 09:39:52 crc kubenswrapper[4760]: E1125 09:39:52.112044 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerName="extract-utilities" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.112052 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerName="extract-utilities" Nov 25 09:39:52 crc kubenswrapper[4760]: E1125 09:39:52.112062 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerName="extract-content" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.112072 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerName="extract-content" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.112445 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e2eee2-a6c8-4991-86f3-03d10c8902e1" containerName="registry-server" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.114126 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.128631 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85k6c"] Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.151215 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-catalog-content\") pod \"redhat-operators-85k6c\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") " pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.151332 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-utilities\") pod \"redhat-operators-85k6c\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") " pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.151382 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9fd2\" (UniqueName: \"kubernetes.io/projected/9ea6db83-4aa8-4e22-bcfb-390d0297456d-kube-api-access-f9fd2\") pod \"redhat-operators-85k6c\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") " pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.253388 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-utilities\") pod \"redhat-operators-85k6c\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") " pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.253772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f9fd2\" (UniqueName: \"kubernetes.io/projected/9ea6db83-4aa8-4e22-bcfb-390d0297456d-kube-api-access-f9fd2\") pod \"redhat-operators-85k6c\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") " pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.254033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-catalog-content\") pod \"redhat-operators-85k6c\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") " pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.253499 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-utilities\") pod \"redhat-operators-85k6c\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") " pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.254507 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-catalog-content\") pod \"redhat-operators-85k6c\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") " pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.284912 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9fd2\" (UniqueName: \"kubernetes.io/projected/9ea6db83-4aa8-4e22-bcfb-390d0297456d-kube-api-access-f9fd2\") pod \"redhat-operators-85k6c\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") " pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.434367 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-85k6c" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.706793 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7zgcv"] Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.709505 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.717282 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zgcv"] Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.763171 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-utilities\") pod \"certified-operators-7zgcv\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") " pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.763586 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-catalog-content\") pod \"certified-operators-7zgcv\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") " pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.763847 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4vhg\" (UniqueName: \"kubernetes.io/projected/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-kube-api-access-z4vhg\") pod \"certified-operators-7zgcv\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") " pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.866498 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-z4vhg\" (UniqueName: \"kubernetes.io/projected/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-kube-api-access-z4vhg\") pod \"certified-operators-7zgcv\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") " pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.866628 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-utilities\") pod \"certified-operators-7zgcv\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") " pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.866694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-catalog-content\") pod \"certified-operators-7zgcv\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") " pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.867320 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-catalog-content\") pod \"certified-operators-7zgcv\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") " pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.867410 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-utilities\") pod \"certified-operators-7zgcv\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") " pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.884917 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4vhg\" (UniqueName: 
\"kubernetes.io/projected/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-kube-api-access-z4vhg\") pod \"certified-operators-7zgcv\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") " pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:52 crc kubenswrapper[4760]: I1125 09:39:52.959881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-85k6c"] Nov 25 09:39:53 crc kubenswrapper[4760]: I1125 09:39:53.036710 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zgcv" Nov 25 09:39:53 crc kubenswrapper[4760]: W1125 09:39:53.535978 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748baa9a_b671_4f21_8b78_fa5b4c1bdcea.slice/crio-c3d13a777d604940729c950288327d4a7c52a9d054a183e6f3b7191d26783ce5 WatchSource:0}: Error finding container c3d13a777d604940729c950288327d4a7c52a9d054a183e6f3b7191d26783ce5: Status 404 returned error can't find the container with id c3d13a777d604940729c950288327d4a7c52a9d054a183e6f3b7191d26783ce5 Nov 25 09:39:53 crc kubenswrapper[4760]: I1125 09:39:53.560047 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zgcv"] Nov 25 09:39:53 crc kubenswrapper[4760]: I1125 09:39:53.771153 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerID="c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8" exitCode=0 Nov 25 09:39:53 crc kubenswrapper[4760]: I1125 09:39:53.771213 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85k6c" event={"ID":"9ea6db83-4aa8-4e22-bcfb-390d0297456d","Type":"ContainerDied","Data":"c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8"} Nov 25 09:39:53 crc kubenswrapper[4760]: I1125 09:39:53.771241 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-85k6c" event={"ID":"9ea6db83-4aa8-4e22-bcfb-390d0297456d","Type":"ContainerStarted","Data":"ba2de55fa4fe1f1f9788972857b269a6cb70db5350e2812628e28c7f32fa2fe7"} Nov 25 09:39:53 crc kubenswrapper[4760]: I1125 09:39:53.775725 4760 generic.go:334] "Generic (PLEG): container finished" podID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerID="19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb" exitCode=0 Nov 25 09:39:53 crc kubenswrapper[4760]: I1125 09:39:53.775776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zgcv" event={"ID":"748baa9a-b671-4f21-8b78-fa5b4c1bdcea","Type":"ContainerDied","Data":"19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb"} Nov 25 09:39:53 crc kubenswrapper[4760]: I1125 09:39:53.775809 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zgcv" event={"ID":"748baa9a-b671-4f21-8b78-fa5b4c1bdcea","Type":"ContainerStarted","Data":"c3d13a777d604940729c950288327d4a7c52a9d054a183e6f3b7191d26783ce5"} Nov 25 09:39:54 crc kubenswrapper[4760]: I1125 09:39:54.785657 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zgcv" event={"ID":"748baa9a-b671-4f21-8b78-fa5b4c1bdcea","Type":"ContainerStarted","Data":"389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14"} Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.091307 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jpnq8"] Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.093947 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.100635 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpnq8"] Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.120627 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c52qx\" (UniqueName: \"kubernetes.io/projected/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-kube-api-access-c52qx\") pod \"redhat-marketplace-jpnq8\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") " pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.120735 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-catalog-content\") pod \"redhat-marketplace-jpnq8\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") " pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.120830 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-utilities\") pod \"redhat-marketplace-jpnq8\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") " pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.222762 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-catalog-content\") pod \"redhat-marketplace-jpnq8\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") " pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.222852 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-utilities\") pod \"redhat-marketplace-jpnq8\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") " pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.222940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c52qx\" (UniqueName: \"kubernetes.io/projected/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-kube-api-access-c52qx\") pod \"redhat-marketplace-jpnq8\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") " pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.223692 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-catalog-content\") pod \"redhat-marketplace-jpnq8\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") " pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.223914 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-utilities\") pod \"redhat-marketplace-jpnq8\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") " pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.254819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c52qx\" (UniqueName: \"kubernetes.io/projected/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-kube-api-access-c52qx\") pod \"redhat-marketplace-jpnq8\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") " pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.420275 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpnq8" Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.795627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85k6c" event={"ID":"9ea6db83-4aa8-4e22-bcfb-390d0297456d","Type":"ContainerStarted","Data":"6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79"} Nov 25 09:39:55 crc kubenswrapper[4760]: I1125 09:39:55.907771 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpnq8"] Nov 25 09:39:55 crc kubenswrapper[4760]: W1125 09:39:55.923068 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b81967a_61eb_4ac4_954b_bcb3ccf69a94.slice/crio-08677396a4704257000ba088b71329069229d74e03cf5d2504a6c9887d3eb5f0 WatchSource:0}: Error finding container 08677396a4704257000ba088b71329069229d74e03cf5d2504a6c9887d3eb5f0: Status 404 returned error can't find the container with id 08677396a4704257000ba088b71329069229d74e03cf5d2504a6c9887d3eb5f0 Nov 25 09:39:56 crc kubenswrapper[4760]: I1125 09:39:56.809928 4760 generic.go:334] "Generic (PLEG): container finished" podID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerID="389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14" exitCode=0 Nov 25 09:39:56 crc kubenswrapper[4760]: I1125 09:39:56.810314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zgcv" event={"ID":"748baa9a-b671-4f21-8b78-fa5b4c1bdcea","Type":"ContainerDied","Data":"389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14"} Nov 25 09:39:56 crc kubenswrapper[4760]: I1125 09:39:56.817553 4760 generic.go:334] "Generic (PLEG): container finished" podID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerID="3cdc714e8b6a8ff8626d82cd613e21b1eeb003adcb2d5b616d36185679ff0347" exitCode=0 Nov 25 09:39:56 crc kubenswrapper[4760]: I1125 
09:39:56.817654 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpnq8" event={"ID":"9b81967a-61eb-4ac4-954b-bcb3ccf69a94","Type":"ContainerDied","Data":"3cdc714e8b6a8ff8626d82cd613e21b1eeb003adcb2d5b616d36185679ff0347"}
Nov 25 09:39:56 crc kubenswrapper[4760]: I1125 09:39:56.817748 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpnq8" event={"ID":"9b81967a-61eb-4ac4-954b-bcb3ccf69a94","Type":"ContainerStarted","Data":"08677396a4704257000ba088b71329069229d74e03cf5d2504a6c9887d3eb5f0"}
Nov 25 09:39:58 crc kubenswrapper[4760]: I1125 09:39:58.841858 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zgcv" event={"ID":"748baa9a-b671-4f21-8b78-fa5b4c1bdcea","Type":"ContainerStarted","Data":"a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2"}
Nov 25 09:39:58 crc kubenswrapper[4760]: I1125 09:39:58.845043 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpnq8" event={"ID":"9b81967a-61eb-4ac4-954b-bcb3ccf69a94","Type":"ContainerStarted","Data":"74288d0cca53855e73a5f19c3a6b4d64250a76206cd3aef1288623e04d802baa"}
Nov 25 09:39:58 crc kubenswrapper[4760]: I1125 09:39:58.867891 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7zgcv" podStartSLOduration=2.970470828 podStartE2EDuration="6.867871395s" podCreationTimestamp="2025-11-25 09:39:52 +0000 UTC" firstStartedPulling="2025-11-25 09:39:53.777305767 +0000 UTC m=+5327.486336562" lastFinishedPulling="2025-11-25 09:39:57.674706334 +0000 UTC m=+5331.383737129" observedRunningTime="2025-11-25 09:39:58.860016581 +0000 UTC m=+5332.569047386" watchObservedRunningTime="2025-11-25 09:39:58.867871395 +0000 UTC m=+5332.576902190"
Nov 25 09:39:59 crc kubenswrapper[4760]: I1125 09:39:59.864962 4760 generic.go:334] "Generic (PLEG): container finished" podID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerID="74288d0cca53855e73a5f19c3a6b4d64250a76206cd3aef1288623e04d802baa" exitCode=0
Nov 25 09:39:59 crc kubenswrapper[4760]: I1125 09:39:59.865070 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpnq8" event={"ID":"9b81967a-61eb-4ac4-954b-bcb3ccf69a94","Type":"ContainerDied","Data":"74288d0cca53855e73a5f19c3a6b4d64250a76206cd3aef1288623e04d802baa"}
Nov 25 09:40:00 crc kubenswrapper[4760]: I1125 09:40:00.877021 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpnq8" event={"ID":"9b81967a-61eb-4ac4-954b-bcb3ccf69a94","Type":"ContainerStarted","Data":"42f864a1baf97cf57190707619d39aeb02e634de1895508e543cbaf28e764eb3"}
Nov 25 09:40:00 crc kubenswrapper[4760]: I1125 09:40:00.899773 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jpnq8" podStartSLOduration=2.148067395 podStartE2EDuration="5.899750221s" podCreationTimestamp="2025-11-25 09:39:55 +0000 UTC" firstStartedPulling="2025-11-25 09:39:56.820101735 +0000 UTC m=+5330.529132530" lastFinishedPulling="2025-11-25 09:40:00.571784561 +0000 UTC m=+5334.280815356" observedRunningTime="2025-11-25 09:40:00.891727233 +0000 UTC m=+5334.600758038" watchObservedRunningTime="2025-11-25 09:40:00.899750221 +0000 UTC m=+5334.608781016"
Nov 25 09:40:01 crc kubenswrapper[4760]: I1125 09:40:01.886905 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerID="6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79" exitCode=0
Nov 25 09:40:01 crc kubenswrapper[4760]: I1125 09:40:01.886993 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85k6c" event={"ID":"9ea6db83-4aa8-4e22-bcfb-390d0297456d","Type":"ContainerDied","Data":"6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79"}
Nov 25 09:40:01 crc kubenswrapper[4760]: I1125 09:40:01.938597 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77"
Nov 25 09:40:01 crc kubenswrapper[4760]: E1125 09:40:01.938915 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2"
Nov 25 09:40:02 crc kubenswrapper[4760]: I1125 09:40:02.898220 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85k6c" event={"ID":"9ea6db83-4aa8-4e22-bcfb-390d0297456d","Type":"ContainerStarted","Data":"6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4"}
Nov 25 09:40:02 crc kubenswrapper[4760]: I1125 09:40:02.920734 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-85k6c" podStartSLOduration=2.410354917 podStartE2EDuration="10.920711878s" podCreationTimestamp="2025-11-25 09:39:52 +0000 UTC" firstStartedPulling="2025-11-25 09:39:53.772549862 +0000 UTC m=+5327.481580647" lastFinishedPulling="2025-11-25 09:40:02.282906823 +0000 UTC m=+5335.991937608" observedRunningTime="2025-11-25 09:40:02.915832559 +0000 UTC m=+5336.624863354" watchObservedRunningTime="2025-11-25 09:40:02.920711878 +0000 UTC m=+5336.629742673"
Nov 25 09:40:03 crc kubenswrapper[4760]: I1125 09:40:03.038490 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7zgcv"
Nov 25 09:40:03 crc kubenswrapper[4760]: I1125 09:40:03.038581 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7zgcv"
Nov 25 09:40:04 crc kubenswrapper[4760]: I1125 09:40:04.089669 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7zgcv" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="registry-server" probeResult="failure" output=<
Nov 25 09:40:04 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Nov 25 09:40:04 crc kubenswrapper[4760]: >
Nov 25 09:40:05 crc kubenswrapper[4760]: I1125 09:40:05.420512 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jpnq8"
Nov 25 09:40:05 crc kubenswrapper[4760]: I1125 09:40:05.421451 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jpnq8"
Nov 25 09:40:06 crc kubenswrapper[4760]: I1125 09:40:06.477541 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jpnq8" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerName="registry-server" probeResult="failure" output=<
Nov 25 09:40:06 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Nov 25 09:40:06 crc kubenswrapper[4760]: >
Nov 25 09:40:12 crc kubenswrapper[4760]: I1125 09:40:12.435792 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-85k6c"
Nov 25 09:40:12 crc kubenswrapper[4760]: I1125 09:40:12.436309 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-85k6c"
Nov 25 09:40:13 crc kubenswrapper[4760]: I1125 09:40:13.484179 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85k6c" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="registry-server" probeResult="failure" output=<
Nov 25 09:40:13 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Nov 25 09:40:13 crc kubenswrapper[4760]: >
Nov 25 09:40:14 crc kubenswrapper[4760]: I1125 09:40:14.098222 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7zgcv" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="registry-server" probeResult="failure" output=<
Nov 25 09:40:14 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Nov 25 09:40:14 crc kubenswrapper[4760]: >
Nov 25 09:40:15 crc kubenswrapper[4760]: I1125 09:40:15.491555 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jpnq8"
Nov 25 09:40:15 crc kubenswrapper[4760]: I1125 09:40:15.544908 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jpnq8"
Nov 25 09:40:15 crc kubenswrapper[4760]: I1125 09:40:15.728102 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpnq8"]
Nov 25 09:40:15 crc kubenswrapper[4760]: I1125 09:40:15.938566 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77"
Nov 25 09:40:15 crc kubenswrapper[4760]: E1125 09:40:15.939193 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2"
Nov 25 09:40:17 crc kubenswrapper[4760]: I1125 09:40:17.024742 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jpnq8" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerName="registry-server" containerID="cri-o://42f864a1baf97cf57190707619d39aeb02e634de1895508e543cbaf28e764eb3" gracePeriod=2
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.048049 4760 generic.go:334] "Generic (PLEG): container finished" podID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerID="42f864a1baf97cf57190707619d39aeb02e634de1895508e543cbaf28e764eb3" exitCode=0
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.048127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpnq8" event={"ID":"9b81967a-61eb-4ac4-954b-bcb3ccf69a94","Type":"ContainerDied","Data":"42f864a1baf97cf57190707619d39aeb02e634de1895508e543cbaf28e764eb3"}
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.323716 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpnq8"
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.446703 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-utilities\") pod \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") "
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.446866 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-catalog-content\") pod \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") "
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.446905 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c52qx\" (UniqueName: \"kubernetes.io/projected/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-kube-api-access-c52qx\") pod \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\" (UID: \"9b81967a-61eb-4ac4-954b-bcb3ccf69a94\") "
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.448212 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-utilities" (OuterVolumeSpecName: "utilities") pod "9b81967a-61eb-4ac4-954b-bcb3ccf69a94" (UID: "9b81967a-61eb-4ac4-954b-bcb3ccf69a94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.461494 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-kube-api-access-c52qx" (OuterVolumeSpecName: "kube-api-access-c52qx") pod "9b81967a-61eb-4ac4-954b-bcb3ccf69a94" (UID: "9b81967a-61eb-4ac4-954b-bcb3ccf69a94"). InnerVolumeSpecName "kube-api-access-c52qx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.473606 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b81967a-61eb-4ac4-954b-bcb3ccf69a94" (UID: "9b81967a-61eb-4ac4-954b-bcb3ccf69a94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.549205 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.549239 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c52qx\" (UniqueName: \"kubernetes.io/projected/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-kube-api-access-c52qx\") on node \"crc\" DevicePath \"\""
Nov 25 09:40:18 crc kubenswrapper[4760]: I1125 09:40:18.549271 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b81967a-61eb-4ac4-954b-bcb3ccf69a94-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 09:40:19 crc kubenswrapper[4760]: I1125 09:40:19.059455 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpnq8" event={"ID":"9b81967a-61eb-4ac4-954b-bcb3ccf69a94","Type":"ContainerDied","Data":"08677396a4704257000ba088b71329069229d74e03cf5d2504a6c9887d3eb5f0"}
Nov 25 09:40:19 crc kubenswrapper[4760]: I1125 09:40:19.059511 4760 scope.go:117] "RemoveContainer" containerID="42f864a1baf97cf57190707619d39aeb02e634de1895508e543cbaf28e764eb3"
Nov 25 09:40:19 crc kubenswrapper[4760]: I1125 09:40:19.060405 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpnq8"
Nov 25 09:40:19 crc kubenswrapper[4760]: I1125 09:40:19.089928 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpnq8"]
Nov 25 09:40:19 crc kubenswrapper[4760]: I1125 09:40:19.092591 4760 scope.go:117] "RemoveContainer" containerID="74288d0cca53855e73a5f19c3a6b4d64250a76206cd3aef1288623e04d802baa"
Nov 25 09:40:19 crc kubenswrapper[4760]: I1125 09:40:19.105808 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpnq8"]
Nov 25 09:40:19 crc kubenswrapper[4760]: I1125 09:40:19.120929 4760 scope.go:117] "RemoveContainer" containerID="3cdc714e8b6a8ff8626d82cd613e21b1eeb003adcb2d5b616d36185679ff0347"
Nov 25 09:40:20 crc kubenswrapper[4760]: I1125 09:40:20.948491 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" path="/var/lib/kubelet/pods/9b81967a-61eb-4ac4-954b-bcb3ccf69a94/volumes"
Nov 25 09:40:23 crc kubenswrapper[4760]: I1125 09:40:23.491021 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85k6c" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="registry-server" probeResult="failure" output=<
Nov 25 09:40:23 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Nov 25 09:40:23 crc kubenswrapper[4760]: >
Nov 25 09:40:24 crc kubenswrapper[4760]: I1125 09:40:24.082583 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7zgcv" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="registry-server" probeResult="failure" output=<
Nov 25 09:40:24 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Nov 25 09:40:24 crc kubenswrapper[4760]: >
Nov 25 09:40:26 crc kubenswrapper[4760]: I1125 09:40:26.947749 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77"
Nov 25 09:40:26 crc kubenswrapper[4760]: E1125 09:40:26.948946 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2"
Nov 25 09:40:33 crc kubenswrapper[4760]: I1125 09:40:33.082700 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7zgcv"
Nov 25 09:40:33 crc kubenswrapper[4760]: I1125 09:40:33.131857 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7zgcv"
Nov 25 09:40:33 crc kubenswrapper[4760]: I1125 09:40:33.324007 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zgcv"]
Nov 25 09:40:33 crc kubenswrapper[4760]: I1125 09:40:33.482129 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-85k6c" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="registry-server" probeResult="failure" output=<
Nov 25 09:40:33 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Nov 25 09:40:33 crc kubenswrapper[4760]: >
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.207380 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7zgcv" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="registry-server" containerID="cri-o://a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2" gracePeriod=2
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.824859 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zgcv"
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.861185 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4vhg\" (UniqueName: \"kubernetes.io/projected/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-kube-api-access-z4vhg\") pod \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") "
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.861295 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-utilities\") pod \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") "
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.861364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-catalog-content\") pod \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\" (UID: \"748baa9a-b671-4f21-8b78-fa5b4c1bdcea\") "
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.862073 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-utilities" (OuterVolumeSpecName: "utilities") pod "748baa9a-b671-4f21-8b78-fa5b4c1bdcea" (UID: "748baa9a-b671-4f21-8b78-fa5b4c1bdcea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.868409 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-kube-api-access-z4vhg" (OuterVolumeSpecName: "kube-api-access-z4vhg") pod "748baa9a-b671-4f21-8b78-fa5b4c1bdcea" (UID: "748baa9a-b671-4f21-8b78-fa5b4c1bdcea"). InnerVolumeSpecName "kube-api-access-z4vhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.920819 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "748baa9a-b671-4f21-8b78-fa5b4c1bdcea" (UID: "748baa9a-b671-4f21-8b78-fa5b4c1bdcea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.963597 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4vhg\" (UniqueName: \"kubernetes.io/projected/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-kube-api-access-z4vhg\") on node \"crc\" DevicePath \"\""
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.963642 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 09:40:34 crc kubenswrapper[4760]: I1125 09:40:34.963657 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748baa9a-b671-4f21-8b78-fa5b4c1bdcea-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.218344 4760 generic.go:334] "Generic (PLEG): container finished" podID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerID="a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2" exitCode=0
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.218394 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zgcv"
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.218398 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zgcv" event={"ID":"748baa9a-b671-4f21-8b78-fa5b4c1bdcea","Type":"ContainerDied","Data":"a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2"}
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.218444 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zgcv" event={"ID":"748baa9a-b671-4f21-8b78-fa5b4c1bdcea","Type":"ContainerDied","Data":"c3d13a777d604940729c950288327d4a7c52a9d054a183e6f3b7191d26783ce5"}
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.218462 4760 scope.go:117] "RemoveContainer" containerID="a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2"
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.243888 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zgcv"]
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.244909 4760 scope.go:117] "RemoveContainer" containerID="389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14"
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.252850 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7zgcv"]
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.266356 4760 scope.go:117] "RemoveContainer" containerID="19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb"
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.308459 4760 scope.go:117] "RemoveContainer" containerID="a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2"
Nov 25 09:40:35 crc kubenswrapper[4760]: E1125 09:40:35.308915 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2\": container with ID starting with a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2 not found: ID does not exist" containerID="a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2"
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.308947 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2"} err="failed to get container status \"a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2\": rpc error: code = NotFound desc = could not find container \"a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2\": container with ID starting with a3818c0c341b8f41a1e69546a0bb06aeac3f33f1b4fcc41c09a256ca6a795ef2 not found: ID does not exist"
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.308970 4760 scope.go:117] "RemoveContainer" containerID="389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14"
Nov 25 09:40:35 crc kubenswrapper[4760]: E1125 09:40:35.309472 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14\": container with ID starting with 389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14 not found: ID does not exist" containerID="389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14"
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.309547 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14"} err="failed to get container status \"389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14\": rpc error: code = NotFound desc = could not find container \"389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14\": container with ID starting with 389cc28c2e06e0fbd2e1b896b04f69cb0178efc9d04f4573564fe90391d72d14 not found: ID does not exist"
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.309597 4760 scope.go:117] "RemoveContainer" containerID="19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb"
Nov 25 09:40:35 crc kubenswrapper[4760]: E1125 09:40:35.309965 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb\": container with ID starting with 19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb not found: ID does not exist" containerID="19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb"
Nov 25 09:40:35 crc kubenswrapper[4760]: I1125 09:40:35.310006 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb"} err="failed to get container status \"19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb\": rpc error: code = NotFound desc = could not find container \"19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb\": container with ID starting with 19ff5bb898be77e5ab5a9ecf98dcf6de37eedcad4d500f74e3b5559edf8e45bb not found: ID does not exist"
Nov 25 09:40:36 crc kubenswrapper[4760]: I1125 09:40:36.951497 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" path="/var/lib/kubelet/pods/748baa9a-b671-4f21-8b78-fa5b4c1bdcea/volumes"
Nov 25 09:40:39 crc kubenswrapper[4760]: I1125 09:40:39.937869 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77"
Nov 25 09:40:39 crc kubenswrapper[4760]: E1125 09:40:39.938378 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2"
Nov 25 09:40:42 crc kubenswrapper[4760]: I1125 09:40:42.485709 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-85k6c"
Nov 25 09:40:42 crc kubenswrapper[4760]: I1125 09:40:42.537340 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-85k6c"
Nov 25 09:40:42 crc kubenswrapper[4760]: I1125 09:40:42.726881 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-85k6c"]
Nov 25 09:40:44 crc kubenswrapper[4760]: I1125 09:40:44.303746 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-85k6c" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="registry-server" containerID="cri-o://6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4" gracePeriod=2
Nov 25 09:40:44 crc kubenswrapper[4760]: I1125 09:40:44.878896 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85k6c"
Nov 25 09:40:44 crc kubenswrapper[4760]: I1125 09:40:44.966439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9fd2\" (UniqueName: \"kubernetes.io/projected/9ea6db83-4aa8-4e22-bcfb-390d0297456d-kube-api-access-f9fd2\") pod \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") "
Nov 25 09:40:44 crc kubenswrapper[4760]: I1125 09:40:44.966521 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-utilities\") pod \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") "
Nov 25 09:40:44 crc kubenswrapper[4760]: I1125 09:40:44.966629 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-catalog-content\") pod \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\" (UID: \"9ea6db83-4aa8-4e22-bcfb-390d0297456d\") "
Nov 25 09:40:44 crc kubenswrapper[4760]: I1125 09:40:44.967487 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-utilities" (OuterVolumeSpecName: "utilities") pod "9ea6db83-4aa8-4e22-bcfb-390d0297456d" (UID: "9ea6db83-4aa8-4e22-bcfb-390d0297456d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:40:44 crc kubenswrapper[4760]: I1125 09:40:44.971590 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea6db83-4aa8-4e22-bcfb-390d0297456d-kube-api-access-f9fd2" (OuterVolumeSpecName: "kube-api-access-f9fd2") pod "9ea6db83-4aa8-4e22-bcfb-390d0297456d" (UID: "9ea6db83-4aa8-4e22-bcfb-390d0297456d"). InnerVolumeSpecName "kube-api-access-f9fd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.057063 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ea6db83-4aa8-4e22-bcfb-390d0297456d" (UID: "9ea6db83-4aa8-4e22-bcfb-390d0297456d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.069212 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9fd2\" (UniqueName: \"kubernetes.io/projected/9ea6db83-4aa8-4e22-bcfb-390d0297456d-kube-api-access-f9fd2\") on node \"crc\" DevicePath \"\""
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.069622 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.069844 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ea6db83-4aa8-4e22-bcfb-390d0297456d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.315515 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerID="6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4" exitCode=0
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.315579 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-85k6c"
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.315589 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85k6c" event={"ID":"9ea6db83-4aa8-4e22-bcfb-390d0297456d","Type":"ContainerDied","Data":"6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4"}
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.316124 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-85k6c" event={"ID":"9ea6db83-4aa8-4e22-bcfb-390d0297456d","Type":"ContainerDied","Data":"ba2de55fa4fe1f1f9788972857b269a6cb70db5350e2812628e28c7f32fa2fe7"}
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.316152 4760 scope.go:117] "RemoveContainer" containerID="6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4"
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.345216 4760 scope.go:117] "RemoveContainer" containerID="6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79"
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.367302 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-85k6c"]
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.370213 4760 scope.go:117] "RemoveContainer" containerID="c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8"
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.380283 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-85k6c"]
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.415421 4760 scope.go:117] "RemoveContainer" containerID="6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4"
Nov 25 09:40:45 crc kubenswrapper[4760]: E1125 09:40:45.415798 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4\": container with ID starting with 6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4 not found: ID does not exist" containerID="6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4"
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.415838 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4"} err="failed to get container status \"6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4\": rpc error: code = NotFound desc = could not find container \"6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4\": container with ID starting with 6a61aef82ad3087847a6d6b12eba9ac60034ef30f2d6b3cc44595bba987cede4 not found: ID does not exist"
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.415865 4760 scope.go:117] "RemoveContainer" containerID="6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79"
Nov 25 09:40:45 crc kubenswrapper[4760]: E1125 09:40:45.416205 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79\": container with ID starting with 6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79 not found: ID does not exist" containerID="6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79"
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.416358 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79"} err="failed to get container status \"6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79\": rpc error: code = NotFound desc = could not find container \"6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79\": container with ID starting with 6f07275d5e0b4b03bcc7aa878b2965458b3bd4cbb5d5623c8b297971aa86ba79 not found: ID does not exist"
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.416467 4760 scope.go:117] "RemoveContainer" containerID="c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8"
Nov 25 09:40:45 crc kubenswrapper[4760]: E1125 09:40:45.416890 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8\": container with ID starting with c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8 not found: ID does not exist" containerID="c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8"
Nov 25 09:40:45 crc kubenswrapper[4760]: I1125 09:40:45.416929 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8"} err="failed to get container status \"c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8\": rpc error: code = NotFound desc = could not find container \"c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8\": container with ID starting with c8d0294d31a1c59019cfd18ec93a28b038ce7a9e14ce30fa4ebe8015a50284b8 not found: ID does not exist"
Nov 25 09:40:46 crc kubenswrapper[4760]: I1125 09:40:46.948979 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" path="/var/lib/kubelet/pods/9ea6db83-4aa8-4e22-bcfb-390d0297456d/volumes"
Nov 25 09:40:54 crc kubenswrapper[4760]: I1125 09:40:54.938437 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77"
Nov 25 09:40:54 crc kubenswrapper[4760]: E1125 09:40:54.941320 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:41:07 crc kubenswrapper[4760]: I1125 09:41:07.938394 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:41:07 crc kubenswrapper[4760]: E1125 09:41:07.939695 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:41:21 crc kubenswrapper[4760]: I1125 09:41:21.939233 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:41:21 crc kubenswrapper[4760]: E1125 09:41:21.939934 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:41:35 crc kubenswrapper[4760]: I1125 09:41:35.938465 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:41:35 crc kubenswrapper[4760]: E1125 09:41:35.939363 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:41:48 crc kubenswrapper[4760]: I1125 09:41:48.938661 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:41:48 crc kubenswrapper[4760]: E1125 09:41:48.939595 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:42:02 crc kubenswrapper[4760]: I1125 09:42:02.938908 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:42:02 crc kubenswrapper[4760]: E1125 09:42:02.939569 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:42:15 crc kubenswrapper[4760]: I1125 09:42:15.939034 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:42:15 crc kubenswrapper[4760]: E1125 09:42:15.940040 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:42:29 crc kubenswrapper[4760]: I1125 09:42:29.938956 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:42:29 crc kubenswrapper[4760]: E1125 09:42:29.939718 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:42:41 crc kubenswrapper[4760]: I1125 09:42:41.938987 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:42:41 crc kubenswrapper[4760]: E1125 09:42:41.940015 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:42:56 crc kubenswrapper[4760]: I1125 09:42:56.946420 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:42:56 crc kubenswrapper[4760]: E1125 09:42:56.947202 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:43:08 crc kubenswrapper[4760]: I1125 09:43:08.938688 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:43:08 crc kubenswrapper[4760]: E1125 09:43:08.939520 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:43:21 crc kubenswrapper[4760]: I1125 09:43:21.938959 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:43:21 crc kubenswrapper[4760]: E1125 09:43:21.939890 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:43:33 crc kubenswrapper[4760]: I1125 09:43:33.939723 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:43:33 crc kubenswrapper[4760]: E1125 09:43:33.940530 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:43:44 crc kubenswrapper[4760]: I1125 09:43:44.938797 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:43:44 crc kubenswrapper[4760]: E1125 09:43:44.939708 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:43:57 crc kubenswrapper[4760]: I1125 09:43:57.938348 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:43:57 crc kubenswrapper[4760]: E1125 09:43:57.939041 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:44:08 crc kubenswrapper[4760]: I1125 09:44:08.938733 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:44:09 crc kubenswrapper[4760]: I1125 09:44:09.234497 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"451ab7b2f8d4391ddcacaea339ce17fb286ea60060c2397b78ca3d7e383a89c6"} Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.161450 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz"] Nov 25 09:45:00 crc kubenswrapper[4760]: E1125 09:45:00.162417 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerName="extract-utilities" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162433 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerName="extract-utilities" Nov 25 09:45:00 crc kubenswrapper[4760]: E1125 09:45:00.162445 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="extract-utilities" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162452 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="extract-utilities" Nov 25 09:45:00 crc kubenswrapper[4760]: E1125 09:45:00.162468 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="extract-content" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162474 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="extract-content" Nov 25 09:45:00 crc kubenswrapper[4760]: E1125 09:45:00.162501 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerName="registry-server" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162507 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" 
containerName="registry-server" Nov 25 09:45:00 crc kubenswrapper[4760]: E1125 09:45:00.162525 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="registry-server" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162531 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="registry-server" Nov 25 09:45:00 crc kubenswrapper[4760]: E1125 09:45:00.162542 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="extract-utilities" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162547 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="extract-utilities" Nov 25 09:45:00 crc kubenswrapper[4760]: E1125 09:45:00.162558 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="registry-server" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162564 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="registry-server" Nov 25 09:45:00 crc kubenswrapper[4760]: E1125 09:45:00.162575 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerName="extract-content" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162580 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerName="extract-content" Nov 25 09:45:00 crc kubenswrapper[4760]: E1125 09:45:00.162594 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="extract-content" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162599 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" 
containerName="extract-content" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162780 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ea6db83-4aa8-4e22-bcfb-390d0297456d" containerName="registry-server" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162797 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b81967a-61eb-4ac4-954b-bcb3ccf69a94" containerName="registry-server" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.162811 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="748baa9a-b671-4f21-8b78-fa5b4c1bdcea" containerName="registry-server" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.163635 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.166186 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.170534 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.176066 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz"] Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.284940 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5df4b1db-3f56-44f6-9e36-121c251339f1-secret-volume\") pod \"collect-profiles-29401065-blvdz\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.285303 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5df4b1db-3f56-44f6-9e36-121c251339f1-config-volume\") pod \"collect-profiles-29401065-blvdz\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.285642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cts7\" (UniqueName: \"kubernetes.io/projected/5df4b1db-3f56-44f6-9e36-121c251339f1-kube-api-access-8cts7\") pod \"collect-profiles-29401065-blvdz\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.387624 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5df4b1db-3f56-44f6-9e36-121c251339f1-secret-volume\") pod \"collect-profiles-29401065-blvdz\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.387730 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5df4b1db-3f56-44f6-9e36-121c251339f1-config-volume\") pod \"collect-profiles-29401065-blvdz\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.387777 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cts7\" (UniqueName: \"kubernetes.io/projected/5df4b1db-3f56-44f6-9e36-121c251339f1-kube-api-access-8cts7\") pod \"collect-profiles-29401065-blvdz\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.389232 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5df4b1db-3f56-44f6-9e36-121c251339f1-config-volume\") pod \"collect-profiles-29401065-blvdz\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.394960 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5df4b1db-3f56-44f6-9e36-121c251339f1-secret-volume\") pod \"collect-profiles-29401065-blvdz\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.407902 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cts7\" (UniqueName: \"kubernetes.io/projected/5df4b1db-3f56-44f6-9e36-121c251339f1-kube-api-access-8cts7\") pod \"collect-profiles-29401065-blvdz\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.507528 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:00 crc kubenswrapper[4760]: I1125 09:45:00.995234 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz"] Nov 25 09:45:01 crc kubenswrapper[4760]: I1125 09:45:01.731046 4760 generic.go:334] "Generic (PLEG): container finished" podID="5df4b1db-3f56-44f6-9e36-121c251339f1" containerID="d917512a8db434e7f7f0b18a7a41e1d02eccf50854b54f3ee2e4e9307802be51" exitCode=0 Nov 25 09:45:01 crc kubenswrapper[4760]: I1125 09:45:01.731529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" event={"ID":"5df4b1db-3f56-44f6-9e36-121c251339f1","Type":"ContainerDied","Data":"d917512a8db434e7f7f0b18a7a41e1d02eccf50854b54f3ee2e4e9307802be51"} Nov 25 09:45:01 crc kubenswrapper[4760]: I1125 09:45:01.731568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" event={"ID":"5df4b1db-3f56-44f6-9e36-121c251339f1","Type":"ContainerStarted","Data":"01430465a7748c4e5f3c3c0150a59600acc6c4a0d64cafc4809a23de867181bc"} Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.271749 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.375737 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cts7\" (UniqueName: \"kubernetes.io/projected/5df4b1db-3f56-44f6-9e36-121c251339f1-kube-api-access-8cts7\") pod \"5df4b1db-3f56-44f6-9e36-121c251339f1\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.375815 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5df4b1db-3f56-44f6-9e36-121c251339f1-config-volume\") pod \"5df4b1db-3f56-44f6-9e36-121c251339f1\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.375856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5df4b1db-3f56-44f6-9e36-121c251339f1-secret-volume\") pod \"5df4b1db-3f56-44f6-9e36-121c251339f1\" (UID: \"5df4b1db-3f56-44f6-9e36-121c251339f1\") " Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.377047 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df4b1db-3f56-44f6-9e36-121c251339f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "5df4b1db-3f56-44f6-9e36-121c251339f1" (UID: "5df4b1db-3f56-44f6-9e36-121c251339f1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.384934 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df4b1db-3f56-44f6-9e36-121c251339f1-kube-api-access-8cts7" (OuterVolumeSpecName: "kube-api-access-8cts7") pod "5df4b1db-3f56-44f6-9e36-121c251339f1" (UID: "5df4b1db-3f56-44f6-9e36-121c251339f1"). 
InnerVolumeSpecName "kube-api-access-8cts7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.391793 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df4b1db-3f56-44f6-9e36-121c251339f1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5df4b1db-3f56-44f6-9e36-121c251339f1" (UID: "5df4b1db-3f56-44f6-9e36-121c251339f1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.479504 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cts7\" (UniqueName: \"kubernetes.io/projected/5df4b1db-3f56-44f6-9e36-121c251339f1-kube-api-access-8cts7\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.479589 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5df4b1db-3f56-44f6-9e36-121c251339f1-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.479603 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5df4b1db-3f56-44f6-9e36-121c251339f1-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.753792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" event={"ID":"5df4b1db-3f56-44f6-9e36-121c251339f1","Type":"ContainerDied","Data":"01430465a7748c4e5f3c3c0150a59600acc6c4a0d64cafc4809a23de867181bc"} Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.753851 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01430465a7748c4e5f3c3c0150a59600acc6c4a0d64cafc4809a23de867181bc" Nov 25 09:45:03 crc kubenswrapper[4760]: I1125 09:45:03.753918 4760 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz" Nov 25 09:45:04 crc kubenswrapper[4760]: I1125 09:45:04.364285 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl"] Nov 25 09:45:04 crc kubenswrapper[4760]: I1125 09:45:04.374275 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-rvgpl"] Nov 25 09:45:04 crc kubenswrapper[4760]: I1125 09:45:04.956686 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe1b33a-f393-4568-b5f0-7e2c57083e36" path="/var/lib/kubelet/pods/0fe1b33a-f393-4568-b5f0-7e2c57083e36/volumes" Nov 25 09:45:35 crc kubenswrapper[4760]: I1125 09:45:35.322294 4760 scope.go:117] "RemoveContainer" containerID="c49a0c5b698ef50bc3a85fef5dc2fdcfded5d751afce3e2ac6b7e0a06a9d9016" Nov 25 09:46:31 crc kubenswrapper[4760]: I1125 09:46:31.746567 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:46:31 crc kubenswrapper[4760]: I1125 09:46:31.747105 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:47:01 crc kubenswrapper[4760]: I1125 09:47:01.746321 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:47:01 crc kubenswrapper[4760]: I1125 09:47:01.746887 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:47:31 crc kubenswrapper[4760]: I1125 09:47:31.746060 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:47:31 crc kubenswrapper[4760]: I1125 09:47:31.746745 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:47:31 crc kubenswrapper[4760]: I1125 09:47:31.746801 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 09:47:31 crc kubenswrapper[4760]: I1125 09:47:31.747659 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"451ab7b2f8d4391ddcacaea339ce17fb286ea60060c2397b78ca3d7e383a89c6"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:47:31 crc kubenswrapper[4760]: I1125 09:47:31.747715 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://451ab7b2f8d4391ddcacaea339ce17fb286ea60060c2397b78ca3d7e383a89c6" gracePeriod=600 Nov 25 09:47:32 crc kubenswrapper[4760]: I1125 09:47:32.107154 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="451ab7b2f8d4391ddcacaea339ce17fb286ea60060c2397b78ca3d7e383a89c6" exitCode=0 Nov 25 09:47:32 crc kubenswrapper[4760]: I1125 09:47:32.107533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"451ab7b2f8d4391ddcacaea339ce17fb286ea60060c2397b78ca3d7e383a89c6"} Nov 25 09:47:32 crc kubenswrapper[4760]: I1125 09:47:32.107567 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab"} Nov 25 09:47:32 crc kubenswrapper[4760]: I1125 09:47:32.107588 4760 scope.go:117] "RemoveContainer" containerID="5e56c6390541e0b758ede3c18c304aa820f47c946d34c8464776613a8878bf77" Nov 25 09:49:19 crc kubenswrapper[4760]: I1125 09:49:19.935475 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wrlpq"] Nov 25 09:49:19 crc kubenswrapper[4760]: E1125 09:49:19.940111 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df4b1db-3f56-44f6-9e36-121c251339f1" containerName="collect-profiles" Nov 25 09:49:19 crc kubenswrapper[4760]: I1125 09:49:19.940140 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df4b1db-3f56-44f6-9e36-121c251339f1" containerName="collect-profiles" Nov 25 09:49:19 crc kubenswrapper[4760]: I1125 09:49:19.940369 4760 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5df4b1db-3f56-44f6-9e36-121c251339f1" containerName="collect-profiles" Nov 25 09:49:19 crc kubenswrapper[4760]: I1125 09:49:19.942287 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:19 crc kubenswrapper[4760]: I1125 09:49:19.948045 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrlpq"] Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.057018 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndcln\" (UniqueName: \"kubernetes.io/projected/33cb26b6-91bf-4730-83c3-34394b39c26a-kube-api-access-ndcln\") pod \"community-operators-wrlpq\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.057103 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-utilities\") pod \"community-operators-wrlpq\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.057213 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-catalog-content\") pod \"community-operators-wrlpq\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.159930 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndcln\" (UniqueName: 
\"kubernetes.io/projected/33cb26b6-91bf-4730-83c3-34394b39c26a-kube-api-access-ndcln\") pod \"community-operators-wrlpq\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.160003 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-utilities\") pod \"community-operators-wrlpq\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.160036 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-catalog-content\") pod \"community-operators-wrlpq\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.160662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-catalog-content\") pod \"community-operators-wrlpq\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.160745 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-utilities\") pod \"community-operators-wrlpq\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.180524 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndcln\" (UniqueName: 
\"kubernetes.io/projected/33cb26b6-91bf-4730-83c3-34394b39c26a-kube-api-access-ndcln\") pod \"community-operators-wrlpq\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.267735 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:20 crc kubenswrapper[4760]: I1125 09:49:20.761311 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrlpq"] Nov 25 09:49:21 crc kubenswrapper[4760]: I1125 09:49:21.146892 4760 generic.go:334] "Generic (PLEG): container finished" podID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerID="6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6" exitCode=0 Nov 25 09:49:21 crc kubenswrapper[4760]: I1125 09:49:21.147010 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrlpq" event={"ID":"33cb26b6-91bf-4730-83c3-34394b39c26a","Type":"ContainerDied","Data":"6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6"} Nov 25 09:49:21 crc kubenswrapper[4760]: I1125 09:49:21.147480 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrlpq" event={"ID":"33cb26b6-91bf-4730-83c3-34394b39c26a","Type":"ContainerStarted","Data":"3733ea00016ea9cf112906c44d338f0f625428103ca6da31606e2be21813fd8d"} Nov 25 09:49:21 crc kubenswrapper[4760]: I1125 09:49:21.148482 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:49:22 crc kubenswrapper[4760]: I1125 09:49:22.158431 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrlpq" event={"ID":"33cb26b6-91bf-4730-83c3-34394b39c26a","Type":"ContainerStarted","Data":"66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7"} Nov 25 09:49:23 
crc kubenswrapper[4760]: I1125 09:49:23.169685 4760 generic.go:334] "Generic (PLEG): container finished" podID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerID="66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7" exitCode=0 Nov 25 09:49:23 crc kubenswrapper[4760]: I1125 09:49:23.169804 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrlpq" event={"ID":"33cb26b6-91bf-4730-83c3-34394b39c26a","Type":"ContainerDied","Data":"66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7"} Nov 25 09:49:24 crc kubenswrapper[4760]: I1125 09:49:24.183392 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrlpq" event={"ID":"33cb26b6-91bf-4730-83c3-34394b39c26a","Type":"ContainerStarted","Data":"4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371"} Nov 25 09:49:24 crc kubenswrapper[4760]: I1125 09:49:24.201704 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wrlpq" podStartSLOduration=2.649608293 podStartE2EDuration="5.201677135s" podCreationTimestamp="2025-11-25 09:49:19 +0000 UTC" firstStartedPulling="2025-11-25 09:49:21.148292533 +0000 UTC m=+5894.857323328" lastFinishedPulling="2025-11-25 09:49:23.700361335 +0000 UTC m=+5897.409392170" observedRunningTime="2025-11-25 09:49:24.199235065 +0000 UTC m=+5897.908265890" watchObservedRunningTime="2025-11-25 09:49:24.201677135 +0000 UTC m=+5897.910707950" Nov 25 09:49:30 crc kubenswrapper[4760]: I1125 09:49:30.268032 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:30 crc kubenswrapper[4760]: I1125 09:49:30.268567 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:30 crc kubenswrapper[4760]: I1125 09:49:30.322210 4760 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:31 crc kubenswrapper[4760]: I1125 09:49:31.308122 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:31 crc kubenswrapper[4760]: I1125 09:49:31.368282 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrlpq"] Nov 25 09:49:33 crc kubenswrapper[4760]: I1125 09:49:33.275574 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wrlpq" podUID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerName="registry-server" containerID="cri-o://4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371" gracePeriod=2 Nov 25 09:49:33 crc kubenswrapper[4760]: I1125 09:49:33.883448 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:33 crc kubenswrapper[4760]: I1125 09:49:33.947774 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-catalog-content\") pod \"33cb26b6-91bf-4730-83c3-34394b39c26a\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " Nov 25 09:49:33 crc kubenswrapper[4760]: I1125 09:49:33.947904 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-utilities\") pod \"33cb26b6-91bf-4730-83c3-34394b39c26a\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " Nov 25 09:49:33 crc kubenswrapper[4760]: I1125 09:49:33.947945 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndcln\" (UniqueName: 
\"kubernetes.io/projected/33cb26b6-91bf-4730-83c3-34394b39c26a-kube-api-access-ndcln\") pod \"33cb26b6-91bf-4730-83c3-34394b39c26a\" (UID: \"33cb26b6-91bf-4730-83c3-34394b39c26a\") " Nov 25 09:49:33 crc kubenswrapper[4760]: I1125 09:49:33.948935 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-utilities" (OuterVolumeSpecName: "utilities") pod "33cb26b6-91bf-4730-83c3-34394b39c26a" (UID: "33cb26b6-91bf-4730-83c3-34394b39c26a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:49:33 crc kubenswrapper[4760]: I1125 09:49:33.957936 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cb26b6-91bf-4730-83c3-34394b39c26a-kube-api-access-ndcln" (OuterVolumeSpecName: "kube-api-access-ndcln") pod "33cb26b6-91bf-4730-83c3-34394b39c26a" (UID: "33cb26b6-91bf-4730-83c3-34394b39c26a"). InnerVolumeSpecName "kube-api-access-ndcln". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.005558 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33cb26b6-91bf-4730-83c3-34394b39c26a" (UID: "33cb26b6-91bf-4730-83c3-34394b39c26a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.050355 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.050402 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cb26b6-91bf-4730-83c3-34394b39c26a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.050415 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndcln\" (UniqueName: \"kubernetes.io/projected/33cb26b6-91bf-4730-83c3-34394b39c26a-kube-api-access-ndcln\") on node \"crc\" DevicePath \"\"" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.290064 4760 generic.go:334] "Generic (PLEG): container finished" podID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerID="4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371" exitCode=0 Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.290148 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrlpq" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.290162 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrlpq" event={"ID":"33cb26b6-91bf-4730-83c3-34394b39c26a","Type":"ContainerDied","Data":"4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371"} Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.290758 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrlpq" event={"ID":"33cb26b6-91bf-4730-83c3-34394b39c26a","Type":"ContainerDied","Data":"3733ea00016ea9cf112906c44d338f0f625428103ca6da31606e2be21813fd8d"} Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.290787 4760 scope.go:117] "RemoveContainer" containerID="4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.329541 4760 scope.go:117] "RemoveContainer" containerID="66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.340159 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrlpq"] Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.354892 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wrlpq"] Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.370193 4760 scope.go:117] "RemoveContainer" containerID="6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.414785 4760 scope.go:117] "RemoveContainer" containerID="4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371" Nov 25 09:49:34 crc kubenswrapper[4760]: E1125 09:49:34.416367 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371\": container with ID starting with 4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371 not found: ID does not exist" containerID="4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.416411 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371"} err="failed to get container status \"4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371\": rpc error: code = NotFound desc = could not find container \"4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371\": container with ID starting with 4c3c2177d78c7faef49a78d852d574cd4d4cb1c9a15dfaa00e12752d9b860371 not found: ID does not exist" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.416443 4760 scope.go:117] "RemoveContainer" containerID="66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7" Nov 25 09:49:34 crc kubenswrapper[4760]: E1125 09:49:34.417343 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7\": container with ID starting with 66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7 not found: ID does not exist" containerID="66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.417369 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7"} err="failed to get container status \"66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7\": rpc error: code = NotFound desc = could not find container \"66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7\": container with ID 
starting with 66cdc0097329997d998ad9aa4fe7e5541e75a426c528c7cc2ebd8ae926183de7 not found: ID does not exist" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.417383 4760 scope.go:117] "RemoveContainer" containerID="6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6" Nov 25 09:49:34 crc kubenswrapper[4760]: E1125 09:49:34.418165 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6\": container with ID starting with 6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6 not found: ID does not exist" containerID="6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.418229 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6"} err="failed to get container status \"6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6\": rpc error: code = NotFound desc = could not find container \"6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6\": container with ID starting with 6b4f5c4660bef3abfbc3913a993e8976871b239d9e3986524a6e557bc5876df6 not found: ID does not exist" Nov 25 09:49:34 crc kubenswrapper[4760]: I1125 09:49:34.950560 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cb26b6-91bf-4730-83c3-34394b39c26a" path="/var/lib/kubelet/pods/33cb26b6-91bf-4730-83c3-34394b39c26a/volumes" Nov 25 09:50:01 crc kubenswrapper[4760]: I1125 09:50:01.746427 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:50:01 crc kubenswrapper[4760]: I1125 
09:50:01.747015 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.194687 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vp7tp"] Nov 25 09:50:21 crc kubenswrapper[4760]: E1125 09:50:21.195543 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerName="registry-server" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.195555 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerName="registry-server" Nov 25 09:50:21 crc kubenswrapper[4760]: E1125 09:50:21.195582 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerName="extract-content" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.195588 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerName="extract-content" Nov 25 09:50:21 crc kubenswrapper[4760]: E1125 09:50:21.195603 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerName="extract-utilities" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.195609 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerName="extract-utilities" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.195778 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cb26b6-91bf-4730-83c3-34394b39c26a" containerName="registry-server" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.197112 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.206229 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vp7tp"] Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.249401 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-utilities\") pod \"redhat-operators-vp7tp\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.249645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-catalog-content\") pod \"redhat-operators-vp7tp\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.249673 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvhls\" (UniqueName: \"kubernetes.io/projected/d5381617-8956-495e-b0be-8b4e2130cb9d-kube-api-access-vvhls\") pod \"redhat-operators-vp7tp\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.351963 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-catalog-content\") pod \"redhat-operators-vp7tp\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.352031 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vvhls\" (UniqueName: \"kubernetes.io/projected/d5381617-8956-495e-b0be-8b4e2130cb9d-kube-api-access-vvhls\") pod \"redhat-operators-vp7tp\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.352061 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-utilities\") pod \"redhat-operators-vp7tp\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.352721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-utilities\") pod \"redhat-operators-vp7tp\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.352984 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-catalog-content\") pod \"redhat-operators-vp7tp\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.375008 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvhls\" (UniqueName: \"kubernetes.io/projected/d5381617-8956-495e-b0be-8b4e2130cb9d-kube-api-access-vvhls\") pod \"redhat-operators-vp7tp\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.520045 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:21 crc kubenswrapper[4760]: I1125 09:50:21.991431 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vp7tp"] Nov 25 09:50:22 crc kubenswrapper[4760]: I1125 09:50:22.754879 4760 generic.go:334] "Generic (PLEG): container finished" podID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerID="346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab" exitCode=0 Nov 25 09:50:22 crc kubenswrapper[4760]: I1125 09:50:22.754993 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7tp" event={"ID":"d5381617-8956-495e-b0be-8b4e2130cb9d","Type":"ContainerDied","Data":"346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab"} Nov 25 09:50:22 crc kubenswrapper[4760]: I1125 09:50:22.756857 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7tp" event={"ID":"d5381617-8956-495e-b0be-8b4e2130cb9d","Type":"ContainerStarted","Data":"a07ee838cb14faa20dc3e89d8691cd7f75ab53aef93048f3141c2f0f7d358086"} Nov 25 09:50:23 crc kubenswrapper[4760]: I1125 09:50:23.767269 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7tp" event={"ID":"d5381617-8956-495e-b0be-8b4e2130cb9d","Type":"ContainerStarted","Data":"f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d"} Nov 25 09:50:29 crc kubenswrapper[4760]: I1125 09:50:29.825623 4760 generic.go:334] "Generic (PLEG): container finished" podID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerID="f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d" exitCode=0 Nov 25 09:50:29 crc kubenswrapper[4760]: I1125 09:50:29.825753 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7tp" 
event={"ID":"d5381617-8956-495e-b0be-8b4e2130cb9d","Type":"ContainerDied","Data":"f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d"} Nov 25 09:50:30 crc kubenswrapper[4760]: I1125 09:50:30.838319 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7tp" event={"ID":"d5381617-8956-495e-b0be-8b4e2130cb9d","Type":"ContainerStarted","Data":"903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3"} Nov 25 09:50:30 crc kubenswrapper[4760]: I1125 09:50:30.856438 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vp7tp" podStartSLOduration=2.2821200409999998 podStartE2EDuration="9.856423366s" podCreationTimestamp="2025-11-25 09:50:21 +0000 UTC" firstStartedPulling="2025-11-25 09:50:22.756723115 +0000 UTC m=+5956.465753910" lastFinishedPulling="2025-11-25 09:50:30.33102644 +0000 UTC m=+5964.040057235" observedRunningTime="2025-11-25 09:50:30.854623754 +0000 UTC m=+5964.563654549" watchObservedRunningTime="2025-11-25 09:50:30.856423366 +0000 UTC m=+5964.565454151" Nov 25 09:50:31 crc kubenswrapper[4760]: I1125 09:50:31.520434 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:31 crc kubenswrapper[4760]: I1125 09:50:31.522223 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:31 crc kubenswrapper[4760]: I1125 09:50:31.745861 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:50:31 crc kubenswrapper[4760]: I1125 09:50:31.746297 4760 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:50:32 crc kubenswrapper[4760]: I1125 09:50:32.578451 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vp7tp" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="registry-server" probeResult="failure" output=< Nov 25 09:50:32 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 09:50:32 crc kubenswrapper[4760]: > Nov 25 09:50:42 crc kubenswrapper[4760]: I1125 09:50:42.578663 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vp7tp" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="registry-server" probeResult="failure" output=< Nov 25 09:50:42 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 09:50:42 crc kubenswrapper[4760]: > Nov 25 09:50:51 crc kubenswrapper[4760]: I1125 09:50:51.570019 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:51 crc kubenswrapper[4760]: I1125 09:50:51.614642 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:52 crc kubenswrapper[4760]: I1125 09:50:52.395943 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vp7tp"] Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.038091 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vp7tp" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="registry-server" 
containerID="cri-o://903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3" gracePeriod=2 Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.655082 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.718573 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-utilities\") pod \"d5381617-8956-495e-b0be-8b4e2130cb9d\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.718663 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-catalog-content\") pod \"d5381617-8956-495e-b0be-8b4e2130cb9d\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.718854 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvhls\" (UniqueName: \"kubernetes.io/projected/d5381617-8956-495e-b0be-8b4e2130cb9d-kube-api-access-vvhls\") pod \"d5381617-8956-495e-b0be-8b4e2130cb9d\" (UID: \"d5381617-8956-495e-b0be-8b4e2130cb9d\") " Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.721438 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-utilities" (OuterVolumeSpecName: "utilities") pod "d5381617-8956-495e-b0be-8b4e2130cb9d" (UID: "d5381617-8956-495e-b0be-8b4e2130cb9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.726293 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5381617-8956-495e-b0be-8b4e2130cb9d-kube-api-access-vvhls" (OuterVolumeSpecName: "kube-api-access-vvhls") pod "d5381617-8956-495e-b0be-8b4e2130cb9d" (UID: "d5381617-8956-495e-b0be-8b4e2130cb9d"). InnerVolumeSpecName "kube-api-access-vvhls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.815603 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5381617-8956-495e-b0be-8b4e2130cb9d" (UID: "d5381617-8956-495e-b0be-8b4e2130cb9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.821861 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvhls\" (UniqueName: \"kubernetes.io/projected/d5381617-8956-495e-b0be-8b4e2130cb9d-kube-api-access-vvhls\") on node \"crc\" DevicePath \"\"" Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.821886 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:50:53 crc kubenswrapper[4760]: I1125 09:50:53.821896 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5381617-8956-495e-b0be-8b4e2130cb9d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.049535 4760 generic.go:334] "Generic (PLEG): container finished" podID="d5381617-8956-495e-b0be-8b4e2130cb9d" 
containerID="903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3" exitCode=0 Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.049890 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7tp" event={"ID":"d5381617-8956-495e-b0be-8b4e2130cb9d","Type":"ContainerDied","Data":"903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3"} Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.049928 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp7tp" event={"ID":"d5381617-8956-495e-b0be-8b4e2130cb9d","Type":"ContainerDied","Data":"a07ee838cb14faa20dc3e89d8691cd7f75ab53aef93048f3141c2f0f7d358086"} Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.049951 4760 scope.go:117] "RemoveContainer" containerID="903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.050120 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vp7tp" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.090712 4760 scope.go:117] "RemoveContainer" containerID="f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.092392 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vp7tp"] Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.102508 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vp7tp"] Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.113111 4760 scope.go:117] "RemoveContainer" containerID="346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.160961 4760 scope.go:117] "RemoveContainer" containerID="903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3" Nov 25 09:50:54 crc kubenswrapper[4760]: E1125 09:50:54.161895 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3\": container with ID starting with 903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3 not found: ID does not exist" containerID="903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.161956 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3"} err="failed to get container status \"903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3\": rpc error: code = NotFound desc = could not find container \"903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3\": container with ID starting with 903193f56e643c5064d12b4f1b08b7d56fc5aa8416cb3879461cf3e23bcd89e3 not found: ID does 
not exist" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.161991 4760 scope.go:117] "RemoveContainer" containerID="f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d" Nov 25 09:50:54 crc kubenswrapper[4760]: E1125 09:50:54.162380 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d\": container with ID starting with f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d not found: ID does not exist" containerID="f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.162412 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d"} err="failed to get container status \"f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d\": rpc error: code = NotFound desc = could not find container \"f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d\": container with ID starting with f001f9dad419b5b74d2c5ed1c200a8238cc1426e983b972594da40b2480b1a9d not found: ID does not exist" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.162432 4760 scope.go:117] "RemoveContainer" containerID="346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab" Nov 25 09:50:54 crc kubenswrapper[4760]: E1125 09:50:54.162680 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab\": container with ID starting with 346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab not found: ID does not exist" containerID="346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.162718 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab"} err="failed to get container status \"346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab\": rpc error: code = NotFound desc = could not find container \"346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab\": container with ID starting with 346ccc42feec963e6aaeba5ab3c03e6794aace08dc705e1c254617ff2c2d9eab not found: ID does not exist" Nov 25 09:50:54 crc kubenswrapper[4760]: I1125 09:50:54.950779 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" path="/var/lib/kubelet/pods/d5381617-8956-495e-b0be-8b4e2130cb9d/volumes" Nov 25 09:51:01 crc kubenswrapper[4760]: I1125 09:51:01.746877 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:51:01 crc kubenswrapper[4760]: I1125 09:51:01.747518 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:51:01 crc kubenswrapper[4760]: I1125 09:51:01.747572 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 09:51:01 crc kubenswrapper[4760]: I1125 09:51:01.748516 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab"} 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:51:01 crc kubenswrapper[4760]: I1125 09:51:01.748582 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" gracePeriod=600 Nov 25 09:51:01 crc kubenswrapper[4760]: E1125 09:51:01.875007 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:51:02 crc kubenswrapper[4760]: I1125 09:51:02.142217 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" exitCode=0 Nov 25 09:51:02 crc kubenswrapper[4760]: I1125 09:51:02.142280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab"} Nov 25 09:51:02 crc kubenswrapper[4760]: I1125 09:51:02.142332 4760 scope.go:117] "RemoveContainer" containerID="451ab7b2f8d4391ddcacaea339ce17fb286ea60060c2397b78ca3d7e383a89c6" Nov 25 09:51:02 crc kubenswrapper[4760]: I1125 09:51:02.143217 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 
25 09:51:02 crc kubenswrapper[4760]: E1125 09:51:02.143723 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.850649 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zs6g4"] Nov 25 09:51:07 crc kubenswrapper[4760]: E1125 09:51:07.851658 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="registry-server" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.851676 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="registry-server" Nov 25 09:51:07 crc kubenswrapper[4760]: E1125 09:51:07.851699 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="extract-content" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.851705 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="extract-content" Nov 25 09:51:07 crc kubenswrapper[4760]: E1125 09:51:07.851715 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="extract-utilities" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.851721 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="extract-utilities" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.851928 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d5381617-8956-495e-b0be-8b4e2130cb9d" containerName="registry-server" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.853367 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.867571 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zs6g4"] Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.894797 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-catalog-content\") pod \"certified-operators-zs6g4\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.894904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-utilities\") pod \"certified-operators-zs6g4\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.894994 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fftq\" (UniqueName: \"kubernetes.io/projected/365f5cb4-5fab-4ba6-89c4-39b374864af9-kube-api-access-4fftq\") pod \"certified-operators-zs6g4\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.997439 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-catalog-content\") pod \"certified-operators-zs6g4\" (UID: 
\"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.997835 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-utilities\") pod \"certified-operators-zs6g4\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.997907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fftq\" (UniqueName: \"kubernetes.io/projected/365f5cb4-5fab-4ba6-89c4-39b374864af9-kube-api-access-4fftq\") pod \"certified-operators-zs6g4\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.999094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-catalog-content\") pod \"certified-operators-zs6g4\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:07 crc kubenswrapper[4760]: I1125 09:51:07.999474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-utilities\") pod \"certified-operators-zs6g4\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:08 crc kubenswrapper[4760]: I1125 09:51:08.024898 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fftq\" (UniqueName: \"kubernetes.io/projected/365f5cb4-5fab-4ba6-89c4-39b374864af9-kube-api-access-4fftq\") pod \"certified-operators-zs6g4\" (UID: 
\"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:08 crc kubenswrapper[4760]: I1125 09:51:08.176279 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:08 crc kubenswrapper[4760]: I1125 09:51:08.717397 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zs6g4"] Nov 25 09:51:09 crc kubenswrapper[4760]: I1125 09:51:09.215146 4760 generic.go:334] "Generic (PLEG): container finished" podID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerID="f4fc062eff954bc78070f543a1b5947944a49224c2cb17ae7bb924fb7994d852" exitCode=0 Nov 25 09:51:09 crc kubenswrapper[4760]: I1125 09:51:09.215234 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs6g4" event={"ID":"365f5cb4-5fab-4ba6-89c4-39b374864af9","Type":"ContainerDied","Data":"f4fc062eff954bc78070f543a1b5947944a49224c2cb17ae7bb924fb7994d852"} Nov 25 09:51:09 crc kubenswrapper[4760]: I1125 09:51:09.215460 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs6g4" event={"ID":"365f5cb4-5fab-4ba6-89c4-39b374864af9","Type":"ContainerStarted","Data":"63cae4a4429ce9f993d7b74329c697f0e4d25136f9120983ec425f9923ac8eb4"} Nov 25 09:51:10 crc kubenswrapper[4760]: I1125 09:51:10.226290 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs6g4" event={"ID":"365f5cb4-5fab-4ba6-89c4-39b374864af9","Type":"ContainerStarted","Data":"81622a86deceb4e21d9b44775dfb4abac2bf2b3698e36a80eb558ebad4b989db"} Nov 25 09:51:11 crc kubenswrapper[4760]: I1125 09:51:11.236975 4760 generic.go:334] "Generic (PLEG): container finished" podID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerID="81622a86deceb4e21d9b44775dfb4abac2bf2b3698e36a80eb558ebad4b989db" exitCode=0 Nov 25 09:51:11 crc kubenswrapper[4760]: I1125 
09:51:11.237023 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs6g4" event={"ID":"365f5cb4-5fab-4ba6-89c4-39b374864af9","Type":"ContainerDied","Data":"81622a86deceb4e21d9b44775dfb4abac2bf2b3698e36a80eb558ebad4b989db"} Nov 25 09:51:12 crc kubenswrapper[4760]: I1125 09:51:12.250222 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs6g4" event={"ID":"365f5cb4-5fab-4ba6-89c4-39b374864af9","Type":"ContainerStarted","Data":"2fa05e0df936dced25a682c3371cd3ff55f2c96db5adb8c751e517db26532cc2"} Nov 25 09:51:12 crc kubenswrapper[4760]: I1125 09:51:12.277189 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zs6g4" podStartSLOduration=2.650921241 podStartE2EDuration="5.277166408s" podCreationTimestamp="2025-11-25 09:51:07 +0000 UTC" firstStartedPulling="2025-11-25 09:51:09.217790634 +0000 UTC m=+6002.926821419" lastFinishedPulling="2025-11-25 09:51:11.844035791 +0000 UTC m=+6005.553066586" observedRunningTime="2025-11-25 09:51:12.26777974 +0000 UTC m=+6005.976810535" watchObservedRunningTime="2025-11-25 09:51:12.277166408 +0000 UTC m=+6005.986197203" Nov 25 09:51:13 crc kubenswrapper[4760]: I1125 09:51:13.938142 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:51:13 crc kubenswrapper[4760]: E1125 09:51:13.938950 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:51:18 crc kubenswrapper[4760]: I1125 09:51:18.176589 4760 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:18 crc kubenswrapper[4760]: I1125 09:51:18.178326 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:18 crc kubenswrapper[4760]: I1125 09:51:18.236730 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:18 crc kubenswrapper[4760]: I1125 09:51:18.354765 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:18 crc kubenswrapper[4760]: I1125 09:51:18.479783 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zs6g4"] Nov 25 09:51:20 crc kubenswrapper[4760]: I1125 09:51:20.320842 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zs6g4" podUID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerName="registry-server" containerID="cri-o://2fa05e0df936dced25a682c3371cd3ff55f2c96db5adb8c751e517db26532cc2" gracePeriod=2 Nov 25 09:51:21 crc kubenswrapper[4760]: I1125 09:51:21.334632 4760 generic.go:334] "Generic (PLEG): container finished" podID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerID="2fa05e0df936dced25a682c3371cd3ff55f2c96db5adb8c751e517db26532cc2" exitCode=0 Nov 25 09:51:21 crc kubenswrapper[4760]: I1125 09:51:21.334744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs6g4" event={"ID":"365f5cb4-5fab-4ba6-89c4-39b374864af9","Type":"ContainerDied","Data":"2fa05e0df936dced25a682c3371cd3ff55f2c96db5adb8c751e517db26532cc2"} Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.055513 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.177601 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-catalog-content\") pod \"365f5cb4-5fab-4ba6-89c4-39b374864af9\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.177673 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fftq\" (UniqueName: \"kubernetes.io/projected/365f5cb4-5fab-4ba6-89c4-39b374864af9-kube-api-access-4fftq\") pod \"365f5cb4-5fab-4ba6-89c4-39b374864af9\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.177816 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-utilities\") pod \"365f5cb4-5fab-4ba6-89c4-39b374864af9\" (UID: \"365f5cb4-5fab-4ba6-89c4-39b374864af9\") " Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.179190 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-utilities" (OuterVolumeSpecName: "utilities") pod "365f5cb4-5fab-4ba6-89c4-39b374864af9" (UID: "365f5cb4-5fab-4ba6-89c4-39b374864af9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.190620 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365f5cb4-5fab-4ba6-89c4-39b374864af9-kube-api-access-4fftq" (OuterVolumeSpecName: "kube-api-access-4fftq") pod "365f5cb4-5fab-4ba6-89c4-39b374864af9" (UID: "365f5cb4-5fab-4ba6-89c4-39b374864af9"). InnerVolumeSpecName "kube-api-access-4fftq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.223702 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "365f5cb4-5fab-4ba6-89c4-39b374864af9" (UID: "365f5cb4-5fab-4ba6-89c4-39b374864af9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.280442 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.280481 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fftq\" (UniqueName: \"kubernetes.io/projected/365f5cb4-5fab-4ba6-89c4-39b374864af9-kube-api-access-4fftq\") on node \"crc\" DevicePath \"\"" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.280499 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/365f5cb4-5fab-4ba6-89c4-39b374864af9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.345893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs6g4" event={"ID":"365f5cb4-5fab-4ba6-89c4-39b374864af9","Type":"ContainerDied","Data":"63cae4a4429ce9f993d7b74329c697f0e4d25136f9120983ec425f9923ac8eb4"} Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.345950 4760 scope.go:117] "RemoveContainer" containerID="2fa05e0df936dced25a682c3371cd3ff55f2c96db5adb8c751e517db26532cc2" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.345967 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zs6g4" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.368017 4760 scope.go:117] "RemoveContainer" containerID="81622a86deceb4e21d9b44775dfb4abac2bf2b3698e36a80eb558ebad4b989db" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.392377 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zs6g4"] Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.393262 4760 scope.go:117] "RemoveContainer" containerID="f4fc062eff954bc78070f543a1b5947944a49224c2cb17ae7bb924fb7994d852" Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.434844 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zs6g4"] Nov 25 09:51:22 crc kubenswrapper[4760]: I1125 09:51:22.952051 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365f5cb4-5fab-4ba6-89c4-39b374864af9" path="/var/lib/kubelet/pods/365f5cb4-5fab-4ba6-89c4-39b374864af9/volumes" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.889463 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bmxfn"] Nov 25 09:51:24 crc kubenswrapper[4760]: E1125 09:51:24.890638 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerName="extract-content" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.890660 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerName="extract-content" Nov 25 09:51:24 crc kubenswrapper[4760]: E1125 09:51:24.890718 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerName="extract-utilities" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.890727 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerName="extract-utilities" 
Nov 25 09:51:24 crc kubenswrapper[4760]: E1125 09:51:24.890744 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerName="registry-server" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.890752 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerName="registry-server" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.891062 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="365f5cb4-5fab-4ba6-89c4-39b374864af9" containerName="registry-server" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.893447 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.898870 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmxfn"] Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.928463 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg7tm\" (UniqueName: \"kubernetes.io/projected/11da82ce-e9dd-49ae-953d-868d00903d79-kube-api-access-lg7tm\") pod \"redhat-marketplace-bmxfn\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.928546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-catalog-content\") pod \"redhat-marketplace-bmxfn\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.928596 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-utilities\") pod \"redhat-marketplace-bmxfn\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:24 crc kubenswrapper[4760]: I1125 09:51:24.939768 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:51:24 crc kubenswrapper[4760]: E1125 09:51:24.940004 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:51:25 crc kubenswrapper[4760]: I1125 09:51:25.030586 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-utilities\") pod \"redhat-marketplace-bmxfn\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:25 crc kubenswrapper[4760]: I1125 09:51:25.031175 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-utilities\") pod \"redhat-marketplace-bmxfn\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:25 crc kubenswrapper[4760]: I1125 09:51:25.031605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg7tm\" (UniqueName: \"kubernetes.io/projected/11da82ce-e9dd-49ae-953d-868d00903d79-kube-api-access-lg7tm\") pod \"redhat-marketplace-bmxfn\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " 
pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:25 crc kubenswrapper[4760]: I1125 09:51:25.031855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-catalog-content\") pod \"redhat-marketplace-bmxfn\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:25 crc kubenswrapper[4760]: I1125 09:51:25.032501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-catalog-content\") pod \"redhat-marketplace-bmxfn\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:25 crc kubenswrapper[4760]: I1125 09:51:25.051490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg7tm\" (UniqueName: \"kubernetes.io/projected/11da82ce-e9dd-49ae-953d-868d00903d79-kube-api-access-lg7tm\") pod \"redhat-marketplace-bmxfn\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:25 crc kubenswrapper[4760]: I1125 09:51:25.220493 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:25 crc kubenswrapper[4760]: I1125 09:51:25.737342 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmxfn"] Nov 25 09:51:26 crc kubenswrapper[4760]: I1125 09:51:26.423757 4760 generic.go:334] "Generic (PLEG): container finished" podID="11da82ce-e9dd-49ae-953d-868d00903d79" containerID="cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a" exitCode=0 Nov 25 09:51:26 crc kubenswrapper[4760]: I1125 09:51:26.423850 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmxfn" event={"ID":"11da82ce-e9dd-49ae-953d-868d00903d79","Type":"ContainerDied","Data":"cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a"} Nov 25 09:51:26 crc kubenswrapper[4760]: I1125 09:51:26.423990 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmxfn" event={"ID":"11da82ce-e9dd-49ae-953d-868d00903d79","Type":"ContainerStarted","Data":"40fa9dc4434078f493a3edcc2372b6e0f90c3a9d7a19080f0680db1f0edf82f1"} Nov 25 09:51:29 crc kubenswrapper[4760]: I1125 09:51:29.449559 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmxfn" event={"ID":"11da82ce-e9dd-49ae-953d-868d00903d79","Type":"ContainerStarted","Data":"da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865"} Nov 25 09:51:30 crc kubenswrapper[4760]: I1125 09:51:30.463496 4760 generic.go:334] "Generic (PLEG): container finished" podID="11da82ce-e9dd-49ae-953d-868d00903d79" containerID="da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865" exitCode=0 Nov 25 09:51:30 crc kubenswrapper[4760]: I1125 09:51:30.463585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmxfn" 
event={"ID":"11da82ce-e9dd-49ae-953d-868d00903d79","Type":"ContainerDied","Data":"da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865"} Nov 25 09:51:31 crc kubenswrapper[4760]: I1125 09:51:31.474703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmxfn" event={"ID":"11da82ce-e9dd-49ae-953d-868d00903d79","Type":"ContainerStarted","Data":"5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46"} Nov 25 09:51:31 crc kubenswrapper[4760]: I1125 09:51:31.502670 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bmxfn" podStartSLOduration=3.075903882 podStartE2EDuration="7.502649583s" podCreationTimestamp="2025-11-25 09:51:24 +0000 UTC" firstStartedPulling="2025-11-25 09:51:26.426002101 +0000 UTC m=+6020.135032896" lastFinishedPulling="2025-11-25 09:51:30.852747802 +0000 UTC m=+6024.561778597" observedRunningTime="2025-11-25 09:51:31.49130725 +0000 UTC m=+6025.200338065" watchObservedRunningTime="2025-11-25 09:51:31.502649583 +0000 UTC m=+6025.211680378" Nov 25 09:51:35 crc kubenswrapper[4760]: I1125 09:51:35.221146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:35 crc kubenswrapper[4760]: I1125 09:51:35.221793 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:35 crc kubenswrapper[4760]: I1125 09:51:35.268861 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:39 crc kubenswrapper[4760]: I1125 09:51:39.938435 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:51:39 crc kubenswrapper[4760]: E1125 09:51:39.939259 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:51:45 crc kubenswrapper[4760]: I1125 09:51:45.277182 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:45 crc kubenswrapper[4760]: I1125 09:51:45.326480 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmxfn"] Nov 25 09:51:45 crc kubenswrapper[4760]: I1125 09:51:45.598652 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bmxfn" podUID="11da82ce-e9dd-49ae-953d-868d00903d79" containerName="registry-server" containerID="cri-o://5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46" gracePeriod=2 Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.182066 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.286188 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-utilities\") pod \"11da82ce-e9dd-49ae-953d-868d00903d79\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.286284 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg7tm\" (UniqueName: \"kubernetes.io/projected/11da82ce-e9dd-49ae-953d-868d00903d79-kube-api-access-lg7tm\") pod \"11da82ce-e9dd-49ae-953d-868d00903d79\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.286335 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-catalog-content\") pod \"11da82ce-e9dd-49ae-953d-868d00903d79\" (UID: \"11da82ce-e9dd-49ae-953d-868d00903d79\") " Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.287229 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-utilities" (OuterVolumeSpecName: "utilities") pod "11da82ce-e9dd-49ae-953d-868d00903d79" (UID: "11da82ce-e9dd-49ae-953d-868d00903d79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.291674 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11da82ce-e9dd-49ae-953d-868d00903d79-kube-api-access-lg7tm" (OuterVolumeSpecName: "kube-api-access-lg7tm") pod "11da82ce-e9dd-49ae-953d-868d00903d79" (UID: "11da82ce-e9dd-49ae-953d-868d00903d79"). InnerVolumeSpecName "kube-api-access-lg7tm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.307756 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11da82ce-e9dd-49ae-953d-868d00903d79" (UID: "11da82ce-e9dd-49ae-953d-868d00903d79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.388890 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.388937 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11da82ce-e9dd-49ae-953d-868d00903d79-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.388952 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg7tm\" (UniqueName: \"kubernetes.io/projected/11da82ce-e9dd-49ae-953d-868d00903d79-kube-api-access-lg7tm\") on node \"crc\" DevicePath \"\"" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.609272 4760 generic.go:334] "Generic (PLEG): container finished" podID="11da82ce-e9dd-49ae-953d-868d00903d79" containerID="5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46" exitCode=0 Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.609318 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmxfn" event={"ID":"11da82ce-e9dd-49ae-953d-868d00903d79","Type":"ContainerDied","Data":"5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46"} Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.609347 4760 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmxfn" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.609368 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmxfn" event={"ID":"11da82ce-e9dd-49ae-953d-868d00903d79","Type":"ContainerDied","Data":"40fa9dc4434078f493a3edcc2372b6e0f90c3a9d7a19080f0680db1f0edf82f1"} Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.609397 4760 scope.go:117] "RemoveContainer" containerID="5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.634086 4760 scope.go:117] "RemoveContainer" containerID="da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.643441 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmxfn"] Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.655881 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmxfn"] Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.663870 4760 scope.go:117] "RemoveContainer" containerID="cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.715397 4760 scope.go:117] "RemoveContainer" containerID="5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46" Nov 25 09:51:46 crc kubenswrapper[4760]: E1125 09:51:46.715886 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46\": container with ID starting with 5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46 not found: ID does not exist" containerID="5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.715924 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46"} err="failed to get container status \"5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46\": rpc error: code = NotFound desc = could not find container \"5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46\": container with ID starting with 5d021da89bcf870512002a98ad3f0e865937497bce3f36e15424de11d3cebb46 not found: ID does not exist" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.715952 4760 scope.go:117] "RemoveContainer" containerID="da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865" Nov 25 09:51:46 crc kubenswrapper[4760]: E1125 09:51:46.716349 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865\": container with ID starting with da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865 not found: ID does not exist" containerID="da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.716378 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865"} err="failed to get container status \"da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865\": rpc error: code = NotFound desc = could not find container \"da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865\": container with ID starting with da4868bbe540b4db8d71ad62064abf1fd739f0604a8676eba0e96cb9b74af865 not found: ID does not exist" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.716398 4760 scope.go:117] "RemoveContainer" containerID="cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a" Nov 25 09:51:46 crc kubenswrapper[4760]: E1125 
09:51:46.716906 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a\": container with ID starting with cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a not found: ID does not exist" containerID="cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.716961 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a"} err="failed to get container status \"cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a\": rpc error: code = NotFound desc = could not find container \"cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a\": container with ID starting with cbb5e6ecf3d12297d3a9a0492550ca90294ba1f3f75b209c4263cbaecf32d67a not found: ID does not exist" Nov 25 09:51:46 crc kubenswrapper[4760]: I1125 09:51:46.951177 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11da82ce-e9dd-49ae-953d-868d00903d79" path="/var/lib/kubelet/pods/11da82ce-e9dd-49ae-953d-868d00903d79/volumes" Nov 25 09:51:53 crc kubenswrapper[4760]: I1125 09:51:53.938983 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:51:53 crc kubenswrapper[4760]: E1125 09:51:53.939817 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:52:08 crc kubenswrapper[4760]: I1125 09:52:08.943191 
4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:52:08 crc kubenswrapper[4760]: E1125 09:52:08.943959 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:52:23 crc kubenswrapper[4760]: I1125 09:52:23.938561 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:52:23 crc kubenswrapper[4760]: E1125 09:52:23.940743 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:52:37 crc kubenswrapper[4760]: I1125 09:52:37.938133 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:52:37 crc kubenswrapper[4760]: E1125 09:52:37.938900 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:52:51 crc kubenswrapper[4760]: I1125 
09:52:51.938026 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:52:51 crc kubenswrapper[4760]: E1125 09:52:51.938786 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:53:05 crc kubenswrapper[4760]: I1125 09:53:05.938992 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:53:05 crc kubenswrapper[4760]: E1125 09:53:05.939773 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:53:18 crc kubenswrapper[4760]: I1125 09:53:18.939537 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:53:18 crc kubenswrapper[4760]: E1125 09:53:18.940306 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:53:30 crc 
kubenswrapper[4760]: I1125 09:53:30.938950 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:53:30 crc kubenswrapper[4760]: E1125 09:53:30.939903 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:53:45 crc kubenswrapper[4760]: I1125 09:53:45.938622 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:53:45 crc kubenswrapper[4760]: E1125 09:53:45.939353 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:53:56 crc kubenswrapper[4760]: I1125 09:53:56.944965 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:53:56 crc kubenswrapper[4760]: E1125 09:53:56.945758 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 
25 09:54:08 crc kubenswrapper[4760]: I1125 09:54:08.938636 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:54:08 crc kubenswrapper[4760]: E1125 09:54:08.939412 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:54:20 crc kubenswrapper[4760]: I1125 09:54:20.947160 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:54:20 crc kubenswrapper[4760]: E1125 09:54:20.950380 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:54:35 crc kubenswrapper[4760]: I1125 09:54:35.939580 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:54:35 crc kubenswrapper[4760]: E1125 09:54:35.940914 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" 
podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:54:46 crc kubenswrapper[4760]: I1125 09:54:46.946240 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:54:46 crc kubenswrapper[4760]: E1125 09:54:46.947295 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:55:01 crc kubenswrapper[4760]: I1125 09:55:01.938901 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:55:01 crc kubenswrapper[4760]: E1125 09:55:01.939744 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:55:16 crc kubenswrapper[4760]: I1125 09:55:16.945203 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:55:16 crc kubenswrapper[4760]: E1125 09:55:16.946485 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:55:30 crc kubenswrapper[4760]: I1125 09:55:30.938270 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:55:30 crc kubenswrapper[4760]: E1125 09:55:30.939379 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:55:42 crc kubenswrapper[4760]: I1125 09:55:42.938220 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:55:42 crc kubenswrapper[4760]: E1125 09:55:42.939387 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:55:54 crc kubenswrapper[4760]: I1125 09:55:54.939151 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:55:54 crc kubenswrapper[4760]: E1125 09:55:54.940009 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 09:56:09 crc kubenswrapper[4760]: I1125 09:56:09.952180 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:56:11 crc kubenswrapper[4760]: I1125 09:56:11.140018 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"dcc36475898ac36954c966e05b9522cab13d877d54bb8cd69956cf0ae84bf93b"} Nov 25 09:58:31 crc kubenswrapper[4760]: I1125 09:58:31.746618 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:58:31 crc kubenswrapper[4760]: I1125 09:58:31.747234 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:59:01 crc kubenswrapper[4760]: I1125 09:59:01.746375 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:59:01 crc kubenswrapper[4760]: I1125 09:59:01.747043 4760 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:59:03 crc kubenswrapper[4760]: I1125 09:59:03.772631 4760 generic.go:334] "Generic (PLEG): container finished" podID="a546f694-04d6-4212-b53a-142420418b97" containerID="f797854b9d0fd441f309ce5569e4de336d7f922b9a1571fe41efdef9165774a7" exitCode=0 Nov 25 09:59:03 crc kubenswrapper[4760]: I1125 09:59:03.772673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"a546f694-04d6-4212-b53a-142420418b97","Type":"ContainerDied","Data":"f797854b9d0fd441f309ce5569e4de336d7f922b9a1571fe41efdef9165774a7"} Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.559113 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.650922 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-workdir\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.651034 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.651052 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-temporary\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.651091 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ssh-key\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.651144 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-openstack-config-secret\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.651217 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sldvg\" (UniqueName: \"kubernetes.io/projected/a546f694-04d6-4212-b53a-142420418b97-kube-api-access-sldvg\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.651239 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ca-certs\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.651275 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ceph\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 
09:59:05.651296 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-config-data\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.651332 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-openstack-config\") pod \"a546f694-04d6-4212-b53a-142420418b97\" (UID: \"a546f694-04d6-4212-b53a-142420418b97\") " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.652079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.652552 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.654635 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-config-data" (OuterVolumeSpecName: "config-data") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.657793 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.658602 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a546f694-04d6-4212-b53a-142420418b97-kube-api-access-sldvg" (OuterVolumeSpecName: "kube-api-access-sldvg") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "kube-api-access-sldvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.660301 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.678752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ceph" (OuterVolumeSpecName: "ceph") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.691729 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.703741 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.719408 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.724671 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a546f694-04d6-4212-b53a-142420418b97" (UID: "a546f694-04d6-4212-b53a-142420418b97"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.754834 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.755558 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sldvg\" (UniqueName: \"kubernetes.io/projected/a546f694-04d6-4212-b53a-142420418b97-kube-api-access-sldvg\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.755578 4760 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.755589 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.755625 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.755639 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a546f694-04d6-4212-b53a-142420418b97-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.755652 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a546f694-04d6-4212-b53a-142420418b97-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: 
I1125 09:59:05.755723 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.755741 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a546f694-04d6-4212-b53a-142420418b97-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.790627 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.792816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"a546f694-04d6-4212-b53a-142420418b97","Type":"ContainerDied","Data":"3c65650755f2c41dbaed8c026d0df0453690cf3c837c43e6e51828991be45cde"} Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.792870 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c65650755f2c41dbaed8c026d0df0453690cf3c837c43e6e51828991be45cde" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.792877 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.857622 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Nov 25 09:59:05 crc kubenswrapper[4760]: E1125 09:59:05.858021 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a546f694-04d6-4212-b53a-142420418b97" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.858039 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a546f694-04d6-4212-b53a-142420418b97" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:59:05 crc kubenswrapper[4760]: E1125 09:59:05.858055 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11da82ce-e9dd-49ae-953d-868d00903d79" containerName="extract-utilities" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.858063 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11da82ce-e9dd-49ae-953d-868d00903d79" containerName="extract-utilities" Nov 25 09:59:05 crc kubenswrapper[4760]: E1125 09:59:05.858074 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11da82ce-e9dd-49ae-953d-868d00903d79" containerName="registry-server" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.858080 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11da82ce-e9dd-49ae-953d-868d00903d79" containerName="registry-server" Nov 25 09:59:05 crc kubenswrapper[4760]: E1125 09:59:05.858102 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11da82ce-e9dd-49ae-953d-868d00903d79" containerName="extract-content" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.858108 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11da82ce-e9dd-49ae-953d-868d00903d79" containerName="extract-content" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.858163 4760 reconciler_common.go:293] "Volume detached for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.858332 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a546f694-04d6-4212-b53a-142420418b97" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.858343 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="11da82ce-e9dd-49ae-953d-868d00903d79" containerName="registry-server" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.858974 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.864831 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.864839 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gq598" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.864954 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.865087 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.876951 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.959874 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " 
pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.959918 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9zk\" (UniqueName: \"kubernetes.io/projected/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-kube-api-access-pz9zk\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.959964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.960012 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.960042 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.960084 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.960124 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.960141 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.960161 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:05 crc kubenswrapper[4760]: I1125 09:59:05.960187 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 
09:59:06.061515 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.061587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.061620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.061658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.061703 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 
09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.061726 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz9zk\" (UniqueName: \"kubernetes.io/projected/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-kube-api-access-pz9zk\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.061791 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.061926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.061992 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.062085 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " 
pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.062136 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.062637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.063387 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.063669 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.063931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config\") pod 
\"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.066825 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.067095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.068957 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.070786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.080975 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz9zk\" (UniqueName: \"kubernetes.io/projected/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-kube-api-access-pz9zk\") pod 
\"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.093477 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.185372 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.742312 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Nov 25 09:59:06 crc kubenswrapper[4760]: I1125 09:59:06.803155 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e","Type":"ContainerStarted","Data":"4b073015ef12eda9149e9b8e4bdeda747adb7aaf0289de0eea25dd85de7ef5a2"} Nov 25 09:59:07 crc kubenswrapper[4760]: I1125 09:59:07.814188 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e","Type":"ContainerStarted","Data":"cfc872377fce4f80091a2612f5d26d775e7f4c542e2ec9f60922ca1376d4b315"} Nov 25 09:59:08 crc kubenswrapper[4760]: I1125 09:59:08.845296 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-test" podStartSLOduration=3.845275474 podStartE2EDuration="3.845275474s" podCreationTimestamp="2025-11-25 09:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:59:08.839996053 
+0000 UTC m=+6482.549026848" watchObservedRunningTime="2025-11-25 09:59:08.845275474 +0000 UTC m=+6482.554306269" Nov 25 09:59:31 crc kubenswrapper[4760]: I1125 09:59:31.745955 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:59:31 crc kubenswrapper[4760]: I1125 09:59:31.746515 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:59:31 crc kubenswrapper[4760]: I1125 09:59:31.746563 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 09:59:31 crc kubenswrapper[4760]: I1125 09:59:31.747350 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dcc36475898ac36954c966e05b9522cab13d877d54bb8cd69956cf0ae84bf93b"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:59:31 crc kubenswrapper[4760]: I1125 09:59:31.747394 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://dcc36475898ac36954c966e05b9522cab13d877d54bb8cd69956cf0ae84bf93b" gracePeriod=600 Nov 25 09:59:32 crc kubenswrapper[4760]: I1125 09:59:32.078606 4760 generic.go:334] "Generic (PLEG): 
container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="dcc36475898ac36954c966e05b9522cab13d877d54bb8cd69956cf0ae84bf93b" exitCode=0 Nov 25 09:59:32 crc kubenswrapper[4760]: I1125 09:59:32.078660 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"dcc36475898ac36954c966e05b9522cab13d877d54bb8cd69956cf0ae84bf93b"} Nov 25 09:59:32 crc kubenswrapper[4760]: I1125 09:59:32.078728 4760 scope.go:117] "RemoveContainer" containerID="6ad3de3a552c5d7f4d22bf54a489a12db4990124a774d2b0a36d9cf09de0a1ab" Nov 25 09:59:33 crc kubenswrapper[4760]: I1125 09:59:33.088631 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c"} Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.156114 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld"] Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.157966 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.160421 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.160977 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.174622 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld"] Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.309765 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78a01b0c-bdc0-4ffb-b377-afa439a600eb-config-volume\") pod \"collect-profiles-29401080-rcdld\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.309838 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv8xl\" (UniqueName: \"kubernetes.io/projected/78a01b0c-bdc0-4ffb-b377-afa439a600eb-kube-api-access-sv8xl\") pod \"collect-profiles-29401080-rcdld\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.309880 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78a01b0c-bdc0-4ffb-b377-afa439a600eb-secret-volume\") pod \"collect-profiles-29401080-rcdld\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.412085 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78a01b0c-bdc0-4ffb-b377-afa439a600eb-config-volume\") pod \"collect-profiles-29401080-rcdld\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.412144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv8xl\" (UniqueName: \"kubernetes.io/projected/78a01b0c-bdc0-4ffb-b377-afa439a600eb-kube-api-access-sv8xl\") pod \"collect-profiles-29401080-rcdld\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.412177 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78a01b0c-bdc0-4ffb-b377-afa439a600eb-secret-volume\") pod \"collect-profiles-29401080-rcdld\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.413845 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78a01b0c-bdc0-4ffb-b377-afa439a600eb-config-volume\") pod \"collect-profiles-29401080-rcdld\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.421181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/78a01b0c-bdc0-4ffb-b377-afa439a600eb-secret-volume\") pod \"collect-profiles-29401080-rcdld\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.434882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv8xl\" (UniqueName: \"kubernetes.io/projected/78a01b0c-bdc0-4ffb-b377-afa439a600eb-kube-api-access-sv8xl\") pod \"collect-profiles-29401080-rcdld\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:00 crc kubenswrapper[4760]: I1125 10:00:00.477860 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:01 crc kubenswrapper[4760]: I1125 10:00:01.040181 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld"] Nov 25 10:00:01 crc kubenswrapper[4760]: I1125 10:00:01.413682 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" event={"ID":"78a01b0c-bdc0-4ffb-b377-afa439a600eb","Type":"ContainerStarted","Data":"5716bc1bb3b7b5dd63b323b1a4a78f90a914b28351964e4497cbbde99f456295"} Nov 25 10:00:01 crc kubenswrapper[4760]: I1125 10:00:01.413744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" event={"ID":"78a01b0c-bdc0-4ffb-b377-afa439a600eb","Type":"ContainerStarted","Data":"dcf7724fb69f1052ba28f2ba87c350dfc476fad22c77626987b72d3a1d381a07"} Nov 25 10:00:02 crc kubenswrapper[4760]: I1125 10:00:02.433473 4760 generic.go:334] "Generic (PLEG): container finished" podID="78a01b0c-bdc0-4ffb-b377-afa439a600eb" 
containerID="5716bc1bb3b7b5dd63b323b1a4a78f90a914b28351964e4497cbbde99f456295" exitCode=0 Nov 25 10:00:02 crc kubenswrapper[4760]: I1125 10:00:02.433535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" event={"ID":"78a01b0c-bdc0-4ffb-b377-afa439a600eb","Type":"ContainerDied","Data":"5716bc1bb3b7b5dd63b323b1a4a78f90a914b28351964e4497cbbde99f456295"} Nov 25 10:00:03 crc kubenswrapper[4760]: I1125 10:00:03.849856 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.002642 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv8xl\" (UniqueName: \"kubernetes.io/projected/78a01b0c-bdc0-4ffb-b377-afa439a600eb-kube-api-access-sv8xl\") pod \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.002834 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78a01b0c-bdc0-4ffb-b377-afa439a600eb-config-volume\") pod \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.002931 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78a01b0c-bdc0-4ffb-b377-afa439a600eb-secret-volume\") pod \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\" (UID: \"78a01b0c-bdc0-4ffb-b377-afa439a600eb\") " Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.003676 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78a01b0c-bdc0-4ffb-b377-afa439a600eb-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"78a01b0c-bdc0-4ffb-b377-afa439a600eb" (UID: "78a01b0c-bdc0-4ffb-b377-afa439a600eb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.008775 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a01b0c-bdc0-4ffb-b377-afa439a600eb-kube-api-access-sv8xl" (OuterVolumeSpecName: "kube-api-access-sv8xl") pod "78a01b0c-bdc0-4ffb-b377-afa439a600eb" (UID: "78a01b0c-bdc0-4ffb-b377-afa439a600eb"). InnerVolumeSpecName "kube-api-access-sv8xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.009068 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a01b0c-bdc0-4ffb-b377-afa439a600eb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "78a01b0c-bdc0-4ffb-b377-afa439a600eb" (UID: "78a01b0c-bdc0-4ffb-b377-afa439a600eb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.105372 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78a01b0c-bdc0-4ffb-b377-afa439a600eb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.105645 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78a01b0c-bdc0-4ffb-b377-afa439a600eb-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.105670 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv8xl\" (UniqueName: \"kubernetes.io/projected/78a01b0c-bdc0-4ffb-b377-afa439a600eb-kube-api-access-sv8xl\") on node \"crc\" DevicePath \"\"" Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.454783 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" event={"ID":"78a01b0c-bdc0-4ffb-b377-afa439a600eb","Type":"ContainerDied","Data":"dcf7724fb69f1052ba28f2ba87c350dfc476fad22c77626987b72d3a1d381a07"} Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.455201 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf7724fb69f1052ba28f2ba87c350dfc476fad22c77626987b72d3a1d381a07" Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.454878 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401080-rcdld" Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.950409 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr"] Nov 25 10:00:04 crc kubenswrapper[4760]: I1125 10:00:04.959676 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401035-7f5cr"] Nov 25 10:00:06 crc kubenswrapper[4760]: I1125 10:00:06.953598 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172cd1f3-5243-4c6f-910d-b29aa186283e" path="/var/lib/kubelet/pods/172cd1f3-5243-4c6f-910d-b29aa186283e/volumes" Nov 25 10:00:35 crc kubenswrapper[4760]: I1125 10:00:35.684617 4760 scope.go:117] "RemoveContainer" containerID="15117536801b19e9b04add2c5ee1d092f20ec41c7be1acbbe18ce218eaf41cac" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.180996 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401081-7bnrg"] Nov 25 10:01:00 crc kubenswrapper[4760]: E1125 10:01:00.182229 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a01b0c-bdc0-4ffb-b377-afa439a600eb" containerName="collect-profiles" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.182272 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a01b0c-bdc0-4ffb-b377-afa439a600eb" containerName="collect-profiles" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.182554 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a01b0c-bdc0-4ffb-b377-afa439a600eb" containerName="collect-profiles" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.183478 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.196789 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401081-7bnrg"] Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.286931 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpmq\" (UniqueName: \"kubernetes.io/projected/d796b091-56b6-4f51-95f8-a4f01db5d9a6-kube-api-access-srpmq\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.288064 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-config-data\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.288151 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-combined-ca-bundle\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.288371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-fernet-keys\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.389757 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-srpmq\" (UniqueName: \"kubernetes.io/projected/d796b091-56b6-4f51-95f8-a4f01db5d9a6-kube-api-access-srpmq\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.389846 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-config-data\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.389876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-combined-ca-bundle\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.389972 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-fernet-keys\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.396682 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-combined-ca-bundle\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.396736 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-fernet-keys\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.397695 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-config-data\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.409878 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpmq\" (UniqueName: \"kubernetes.io/projected/d796b091-56b6-4f51-95f8-a4f01db5d9a6-kube-api-access-srpmq\") pod \"keystone-cron-29401081-7bnrg\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.507079 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.962659 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401081-7bnrg"] Nov 25 10:01:00 crc kubenswrapper[4760]: I1125 10:01:00.990684 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401081-7bnrg" event={"ID":"d796b091-56b6-4f51-95f8-a4f01db5d9a6","Type":"ContainerStarted","Data":"3e0e8c655108fb297da2d4bddc87e441ed01c829054b68157a8a0f5af4f84efa"} Nov 25 10:01:02 crc kubenswrapper[4760]: I1125 10:01:01.999764 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401081-7bnrg" event={"ID":"d796b091-56b6-4f51-95f8-a4f01db5d9a6","Type":"ContainerStarted","Data":"273955385c9606844d1c938424fc3a5deeb61982fddc702540b4d21f1b83a2ac"} Nov 25 10:01:02 crc kubenswrapper[4760]: I1125 10:01:02.028585 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401081-7bnrg" podStartSLOduration=2.028563775 podStartE2EDuration="2.028563775s" podCreationTimestamp="2025-11-25 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:01:02.019145736 +0000 UTC m=+6595.728176531" watchObservedRunningTime="2025-11-25 10:01:02.028563775 +0000 UTC m=+6595.737594570" Nov 25 10:01:05 crc kubenswrapper[4760]: I1125 10:01:05.027838 4760 generic.go:334] "Generic (PLEG): container finished" podID="d796b091-56b6-4f51-95f8-a4f01db5d9a6" containerID="273955385c9606844d1c938424fc3a5deeb61982fddc702540b4d21f1b83a2ac" exitCode=0 Nov 25 10:01:05 crc kubenswrapper[4760]: I1125 10:01:05.028032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401081-7bnrg" 
event={"ID":"d796b091-56b6-4f51-95f8-a4f01db5d9a6","Type":"ContainerDied","Data":"273955385c9606844d1c938424fc3a5deeb61982fddc702540b4d21f1b83a2ac"} Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.414672 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.534890 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srpmq\" (UniqueName: \"kubernetes.io/projected/d796b091-56b6-4f51-95f8-a4f01db5d9a6-kube-api-access-srpmq\") pod \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.535022 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-config-data\") pod \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.535147 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-fernet-keys\") pod \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.535318 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-combined-ca-bundle\") pod \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\" (UID: \"d796b091-56b6-4f51-95f8-a4f01db5d9a6\") " Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.555571 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d796b091-56b6-4f51-95f8-a4f01db5d9a6-kube-api-access-srpmq" 
(OuterVolumeSpecName: "kube-api-access-srpmq") pod "d796b091-56b6-4f51-95f8-a4f01db5d9a6" (UID: "d796b091-56b6-4f51-95f8-a4f01db5d9a6"). InnerVolumeSpecName "kube-api-access-srpmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.567406 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d796b091-56b6-4f51-95f8-a4f01db5d9a6" (UID: "d796b091-56b6-4f51-95f8-a4f01db5d9a6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.602631 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d796b091-56b6-4f51-95f8-a4f01db5d9a6" (UID: "d796b091-56b6-4f51-95f8-a4f01db5d9a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.638213 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.638639 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.638657 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srpmq\" (UniqueName: \"kubernetes.io/projected/d796b091-56b6-4f51-95f8-a4f01db5d9a6-kube-api-access-srpmq\") on node \"crc\" DevicePath \"\"" Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.642429 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-config-data" (OuterVolumeSpecName: "config-data") pod "d796b091-56b6-4f51-95f8-a4f01db5d9a6" (UID: "d796b091-56b6-4f51-95f8-a4f01db5d9a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:01:06 crc kubenswrapper[4760]: I1125 10:01:06.740609 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d796b091-56b6-4f51-95f8-a4f01db5d9a6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 10:01:07 crc kubenswrapper[4760]: I1125 10:01:07.049918 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401081-7bnrg" event={"ID":"d796b091-56b6-4f51-95f8-a4f01db5d9a6","Type":"ContainerDied","Data":"3e0e8c655108fb297da2d4bddc87e441ed01c829054b68157a8a0f5af4f84efa"} Nov 25 10:01:07 crc kubenswrapper[4760]: I1125 10:01:07.050166 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e0e8c655108fb297da2d4bddc87e441ed01c829054b68157a8a0f5af4f84efa" Nov 25 10:01:07 crc kubenswrapper[4760]: I1125 10:01:07.049966 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401081-7bnrg" Nov 25 10:01:08 crc kubenswrapper[4760]: E1125 10:01:08.772429 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd796b091_56b6_4f51_95f8_a4f01db5d9a6.slice\": RecentStats: unable to find data in memory cache]" Nov 25 10:01:19 crc kubenswrapper[4760]: E1125 10:01:19.013231 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd796b091_56b6_4f51_95f8_a4f01db5d9a6.slice\": RecentStats: unable to find data in memory cache]" Nov 25 10:01:29 crc kubenswrapper[4760]: E1125 10:01:29.246414 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd796b091_56b6_4f51_95f8_a4f01db5d9a6.slice\": 
RecentStats: unable to find data in memory cache]" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.354168 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rngcx"] Nov 25 10:01:32 crc kubenswrapper[4760]: E1125 10:01:32.355087 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d796b091-56b6-4f51-95f8-a4f01db5d9a6" containerName="keystone-cron" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.355099 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d796b091-56b6-4f51-95f8-a4f01db5d9a6" containerName="keystone-cron" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.355323 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d796b091-56b6-4f51-95f8-a4f01db5d9a6" containerName="keystone-cron" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.358488 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.366972 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rngcx"] Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.512091 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-catalog-content\") pod \"redhat-marketplace-rngcx\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.512278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78b6h\" (UniqueName: \"kubernetes.io/projected/5e37d6c3-d42b-42f8-bc33-724a8117c79a-kube-api-access-78b6h\") pod \"redhat-marketplace-rngcx\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " 
pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.512320 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-utilities\") pod \"redhat-marketplace-rngcx\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.613777 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78b6h\" (UniqueName: \"kubernetes.io/projected/5e37d6c3-d42b-42f8-bc33-724a8117c79a-kube-api-access-78b6h\") pod \"redhat-marketplace-rngcx\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.613859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-utilities\") pod \"redhat-marketplace-rngcx\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.614010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-catalog-content\") pod \"redhat-marketplace-rngcx\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.614470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-catalog-content\") pod \"redhat-marketplace-rngcx\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " 
pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.614470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-utilities\") pod \"redhat-marketplace-rngcx\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.634979 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78b6h\" (UniqueName: \"kubernetes.io/projected/5e37d6c3-d42b-42f8-bc33-724a8117c79a-kube-api-access-78b6h\") pod \"redhat-marketplace-rngcx\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:32 crc kubenswrapper[4760]: I1125 10:01:32.677690 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:33 crc kubenswrapper[4760]: I1125 10:01:33.166501 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rngcx"] Nov 25 10:01:33 crc kubenswrapper[4760]: I1125 10:01:33.310071 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rngcx" event={"ID":"5e37d6c3-d42b-42f8-bc33-724a8117c79a","Type":"ContainerStarted","Data":"0695e3b75ad0eef0ad393110b364bb362c699fcb362ea4022aca595aa8c80b5b"} Nov 25 10:01:34 crc kubenswrapper[4760]: I1125 10:01:34.321132 4760 generic.go:334] "Generic (PLEG): container finished" podID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerID="dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48" exitCode=0 Nov 25 10:01:34 crc kubenswrapper[4760]: I1125 10:01:34.321235 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rngcx" 
event={"ID":"5e37d6c3-d42b-42f8-bc33-724a8117c79a","Type":"ContainerDied","Data":"dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48"} Nov 25 10:01:34 crc kubenswrapper[4760]: I1125 10:01:34.323429 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 10:01:35 crc kubenswrapper[4760]: I1125 10:01:35.333857 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rngcx" event={"ID":"5e37d6c3-d42b-42f8-bc33-724a8117c79a","Type":"ContainerStarted","Data":"f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652"} Nov 25 10:01:36 crc kubenswrapper[4760]: I1125 10:01:36.421512 4760 generic.go:334] "Generic (PLEG): container finished" podID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerID="f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652" exitCode=0 Nov 25 10:01:36 crc kubenswrapper[4760]: I1125 10:01:36.421873 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rngcx" event={"ID":"5e37d6c3-d42b-42f8-bc33-724a8117c79a","Type":"ContainerDied","Data":"f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652"} Nov 25 10:01:37 crc kubenswrapper[4760]: I1125 10:01:37.439081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rngcx" event={"ID":"5e37d6c3-d42b-42f8-bc33-724a8117c79a","Type":"ContainerStarted","Data":"cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6"} Nov 25 10:01:37 crc kubenswrapper[4760]: I1125 10:01:37.469920 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rngcx" podStartSLOduration=3.579786713 podStartE2EDuration="5.469895247s" podCreationTimestamp="2025-11-25 10:01:32 +0000 UTC" firstStartedPulling="2025-11-25 10:01:34.323223343 +0000 UTC m=+6628.032254138" lastFinishedPulling="2025-11-25 10:01:36.213331877 +0000 UTC 
m=+6629.922362672" observedRunningTime="2025-11-25 10:01:37.458275185 +0000 UTC m=+6631.167305980" watchObservedRunningTime="2025-11-25 10:01:37.469895247 +0000 UTC m=+6631.178926042" Nov 25 10:01:39 crc kubenswrapper[4760]: E1125 10:01:39.495539 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd796b091_56b6_4f51_95f8_a4f01db5d9a6.slice\": RecentStats: unable to find data in memory cache]" Nov 25 10:01:42 crc kubenswrapper[4760]: I1125 10:01:42.678806 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:42 crc kubenswrapper[4760]: I1125 10:01:42.679495 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:42 crc kubenswrapper[4760]: I1125 10:01:42.726658 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:43 crc kubenswrapper[4760]: I1125 10:01:43.564079 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:43 crc kubenswrapper[4760]: I1125 10:01:43.609124 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rngcx"] Nov 25 10:01:45 crc kubenswrapper[4760]: I1125 10:01:45.535104 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rngcx" podUID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerName="registry-server" containerID="cri-o://cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6" gracePeriod=2 Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.186519 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.243663 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78b6h\" (UniqueName: \"kubernetes.io/projected/5e37d6c3-d42b-42f8-bc33-724a8117c79a-kube-api-access-78b6h\") pod \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.244098 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-catalog-content\") pod \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.244145 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-utilities\") pod \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\" (UID: \"5e37d6c3-d42b-42f8-bc33-724a8117c79a\") " Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.245079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-utilities" (OuterVolumeSpecName: "utilities") pod "5e37d6c3-d42b-42f8-bc33-724a8117c79a" (UID: "5e37d6c3-d42b-42f8-bc33-724a8117c79a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.255395 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e37d6c3-d42b-42f8-bc33-724a8117c79a-kube-api-access-78b6h" (OuterVolumeSpecName: "kube-api-access-78b6h") pod "5e37d6c3-d42b-42f8-bc33-724a8117c79a" (UID: "5e37d6c3-d42b-42f8-bc33-724a8117c79a"). InnerVolumeSpecName "kube-api-access-78b6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.267579 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e37d6c3-d42b-42f8-bc33-724a8117c79a" (UID: "5e37d6c3-d42b-42f8-bc33-724a8117c79a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.345429 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.345541 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e37d6c3-d42b-42f8-bc33-724a8117c79a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.345558 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78b6h\" (UniqueName: \"kubernetes.io/projected/5e37d6c3-d42b-42f8-bc33-724a8117c79a-kube-api-access-78b6h\") on node \"crc\" DevicePath \"\"" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.546110 4760 generic.go:334] "Generic (PLEG): container finished" podID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerID="cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6" exitCode=0 Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.546152 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rngcx" event={"ID":"5e37d6c3-d42b-42f8-bc33-724a8117c79a","Type":"ContainerDied","Data":"cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6"} Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.546186 4760 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-rngcx" event={"ID":"5e37d6c3-d42b-42f8-bc33-724a8117c79a","Type":"ContainerDied","Data":"0695e3b75ad0eef0ad393110b364bb362c699fcb362ea4022aca595aa8c80b5b"} Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.546205 4760 scope.go:117] "RemoveContainer" containerID="cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.546244 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rngcx" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.581912 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rngcx"] Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.585500 4760 scope.go:117] "RemoveContainer" containerID="f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.606842 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rngcx"] Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.615520 4760 scope.go:117] "RemoveContainer" containerID="dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.673623 4760 scope.go:117] "RemoveContainer" containerID="cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6" Nov 25 10:01:46 crc kubenswrapper[4760]: E1125 10:01:46.674063 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6\": container with ID starting with cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6 not found: ID does not exist" containerID="cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.674106 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6"} err="failed to get container status \"cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6\": rpc error: code = NotFound desc = could not find container \"cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6\": container with ID starting with cc60d60928499e224f4156dba8e353f727b1eaa5cbf6b31eacb39b5eb506caf6 not found: ID does not exist" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.674134 4760 scope.go:117] "RemoveContainer" containerID="f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652" Nov 25 10:01:46 crc kubenswrapper[4760]: E1125 10:01:46.674461 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652\": container with ID starting with f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652 not found: ID does not exist" containerID="f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.674488 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652"} err="failed to get container status \"f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652\": rpc error: code = NotFound desc = could not find container \"f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652\": container with ID starting with f0ea3c868926c31f7033ab49b4547dcd26ac1bea6d89f16aadf0eda1c2d21652 not found: ID does not exist" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.674502 4760 scope.go:117] "RemoveContainer" containerID="dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48" Nov 25 10:01:46 crc kubenswrapper[4760]: E1125 
10:01:46.674739 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48\": container with ID starting with dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48 not found: ID does not exist" containerID="dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.674762 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48"} err="failed to get container status \"dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48\": rpc error: code = NotFound desc = could not find container \"dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48\": container with ID starting with dd07fc17e0b8b937157cde45ac208a549b1e62042f352359fa821ca9e637ea48 not found: ID does not exist" Nov 25 10:01:46 crc kubenswrapper[4760]: I1125 10:01:46.948316 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" path="/var/lib/kubelet/pods/5e37d6c3-d42b-42f8-bc33-724a8117c79a/volumes" Nov 25 10:01:49 crc kubenswrapper[4760]: E1125 10:01:49.750481 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd796b091_56b6_4f51_95f8_a4f01db5d9a6.slice\": RecentStats: unable to find data in memory cache]" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.163205 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9rhsc"] Nov 25 10:01:51 crc kubenswrapper[4760]: E1125 10:01:51.163600 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerName="extract-content" Nov 25 10:01:51 crc 
kubenswrapper[4760]: I1125 10:01:51.163612 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerName="extract-content" Nov 25 10:01:51 crc kubenswrapper[4760]: E1125 10:01:51.163638 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerName="registry-server" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.163644 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerName="registry-server" Nov 25 10:01:51 crc kubenswrapper[4760]: E1125 10:01:51.163674 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerName="extract-utilities" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.163681 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerName="extract-utilities" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.163850 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e37d6c3-d42b-42f8-bc33-724a8117c79a" containerName="registry-server" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.165149 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.184797 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9rhsc"] Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.240288 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-catalog-content\") pod \"certified-operators-9rhsc\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.240483 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-utilities\") pod \"certified-operators-9rhsc\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.240525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j6gx\" (UniqueName: \"kubernetes.io/projected/100d7182-c48f-4ea9-88af-b66a46ac1109-kube-api-access-8j6gx\") pod \"certified-operators-9rhsc\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.342531 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-utilities\") pod \"certified-operators-9rhsc\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.342662 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8j6gx\" (UniqueName: \"kubernetes.io/projected/100d7182-c48f-4ea9-88af-b66a46ac1109-kube-api-access-8j6gx\") pod \"certified-operators-9rhsc\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.342735 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-catalog-content\") pod \"certified-operators-9rhsc\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.343406 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-catalog-content\") pod \"certified-operators-9rhsc\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.343600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-utilities\") pod \"certified-operators-9rhsc\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.363521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6gx\" (UniqueName: \"kubernetes.io/projected/100d7182-c48f-4ea9-88af-b66a46ac1109-kube-api-access-8j6gx\") pod \"certified-operators-9rhsc\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:51 crc kubenswrapper[4760]: I1125 10:01:51.490153 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:01:52 crc kubenswrapper[4760]: I1125 10:01:52.043788 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9rhsc"] Nov 25 10:01:52 crc kubenswrapper[4760]: I1125 10:01:52.605275 4760 generic.go:334] "Generic (PLEG): container finished" podID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerID="95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c" exitCode=0 Nov 25 10:01:52 crc kubenswrapper[4760]: I1125 10:01:52.605339 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rhsc" event={"ID":"100d7182-c48f-4ea9-88af-b66a46ac1109","Type":"ContainerDied","Data":"95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c"} Nov 25 10:01:52 crc kubenswrapper[4760]: I1125 10:01:52.605541 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rhsc" event={"ID":"100d7182-c48f-4ea9-88af-b66a46ac1109","Type":"ContainerStarted","Data":"2b31caa3d5e4ce4489e7347df8f37a1ea89995da8c9b724d98de257d5fb1f7d9"} Nov 25 10:01:54 crc kubenswrapper[4760]: I1125 10:01:54.628117 4760 generic.go:334] "Generic (PLEG): container finished" podID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerID="4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb" exitCode=0 Nov 25 10:01:54 crc kubenswrapper[4760]: I1125 10:01:54.628192 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rhsc" event={"ID":"100d7182-c48f-4ea9-88af-b66a46ac1109","Type":"ContainerDied","Data":"4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb"} Nov 25 10:01:56 crc kubenswrapper[4760]: I1125 10:01:56.657625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rhsc" 
event={"ID":"100d7182-c48f-4ea9-88af-b66a46ac1109","Type":"ContainerStarted","Data":"effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9"} Nov 25 10:01:56 crc kubenswrapper[4760]: I1125 10:01:56.691787 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9rhsc" podStartSLOduration=2.857549468 podStartE2EDuration="5.691748597s" podCreationTimestamp="2025-11-25 10:01:51 +0000 UTC" firstStartedPulling="2025-11-25 10:01:52.607145955 +0000 UTC m=+6646.316176740" lastFinishedPulling="2025-11-25 10:01:55.441345064 +0000 UTC m=+6649.150375869" observedRunningTime="2025-11-25 10:01:56.682433991 +0000 UTC m=+6650.391464836" watchObservedRunningTime="2025-11-25 10:01:56.691748597 +0000 UTC m=+6650.400779442" Nov 25 10:02:00 crc kubenswrapper[4760]: E1125 10:02:00.016170 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd796b091_56b6_4f51_95f8_a4f01db5d9a6.slice\": RecentStats: unable to find data in memory cache]" Nov 25 10:02:01 crc kubenswrapper[4760]: I1125 10:02:01.491005 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:02:01 crc kubenswrapper[4760]: I1125 10:02:01.491399 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:02:01 crc kubenswrapper[4760]: I1125 10:02:01.534430 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:02:01 crc kubenswrapper[4760]: I1125 10:02:01.746574 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:02:01 crc kubenswrapper[4760]: I1125 10:02:01.746930 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:02:01 crc kubenswrapper[4760]: I1125 10:02:01.770358 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:02:01 crc kubenswrapper[4760]: I1125 10:02:01.836358 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9rhsc"] Nov 25 10:02:03 crc kubenswrapper[4760]: I1125 10:02:03.727938 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9rhsc" podUID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerName="registry-server" containerID="cri-o://effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9" gracePeriod=2 Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.197931 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.328083 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j6gx\" (UniqueName: \"kubernetes.io/projected/100d7182-c48f-4ea9-88af-b66a46ac1109-kube-api-access-8j6gx\") pod \"100d7182-c48f-4ea9-88af-b66a46ac1109\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.328235 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-utilities\") pod \"100d7182-c48f-4ea9-88af-b66a46ac1109\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.328281 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-catalog-content\") pod \"100d7182-c48f-4ea9-88af-b66a46ac1109\" (UID: \"100d7182-c48f-4ea9-88af-b66a46ac1109\") " Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.329769 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-utilities" (OuterVolumeSpecName: "utilities") pod "100d7182-c48f-4ea9-88af-b66a46ac1109" (UID: "100d7182-c48f-4ea9-88af-b66a46ac1109"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.334446 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100d7182-c48f-4ea9-88af-b66a46ac1109-kube-api-access-8j6gx" (OuterVolumeSpecName: "kube-api-access-8j6gx") pod "100d7182-c48f-4ea9-88af-b66a46ac1109" (UID: "100d7182-c48f-4ea9-88af-b66a46ac1109"). InnerVolumeSpecName "kube-api-access-8j6gx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.380911 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "100d7182-c48f-4ea9-88af-b66a46ac1109" (UID: "100d7182-c48f-4ea9-88af-b66a46ac1109"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.431944 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j6gx\" (UniqueName: \"kubernetes.io/projected/100d7182-c48f-4ea9-88af-b66a46ac1109-kube-api-access-8j6gx\") on node \"crc\" DevicePath \"\"" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.432013 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.432033 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100d7182-c48f-4ea9-88af-b66a46ac1109-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.742281 4760 generic.go:334] "Generic (PLEG): container finished" podID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerID="effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9" exitCode=0 Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.742347 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9rhsc" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.742360 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rhsc" event={"ID":"100d7182-c48f-4ea9-88af-b66a46ac1109","Type":"ContainerDied","Data":"effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9"} Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.742415 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rhsc" event={"ID":"100d7182-c48f-4ea9-88af-b66a46ac1109","Type":"ContainerDied","Data":"2b31caa3d5e4ce4489e7347df8f37a1ea89995da8c9b724d98de257d5fb1f7d9"} Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.742439 4760 scope.go:117] "RemoveContainer" containerID="effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.770751 4760 scope.go:117] "RemoveContainer" containerID="4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.776300 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9rhsc"] Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.785976 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9rhsc"] Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.800694 4760 scope.go:117] "RemoveContainer" containerID="95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.834326 4760 scope.go:117] "RemoveContainer" containerID="effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9" Nov 25 10:02:04 crc kubenswrapper[4760]: E1125 10:02:04.835544 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9\": container with ID starting with effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9 not found: ID does not exist" containerID="effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.835586 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9"} err="failed to get container status \"effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9\": rpc error: code = NotFound desc = could not find container \"effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9\": container with ID starting with effe4131d8b00ae61992fa826ceda982f6d889a97b55aafff1570bd6c5f7d3c9 not found: ID does not exist" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.835634 4760 scope.go:117] "RemoveContainer" containerID="4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb" Nov 25 10:02:04 crc kubenswrapper[4760]: E1125 10:02:04.836212 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb\": container with ID starting with 4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb not found: ID does not exist" containerID="4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.836235 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb"} err="failed to get container status \"4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb\": rpc error: code = NotFound desc = could not find container \"4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb\": container with ID 
starting with 4da6d59645ea84d9f3bf92b129b6e59072408eda624ff6f2263b72f0db16b2bb not found: ID does not exist" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.836411 4760 scope.go:117] "RemoveContainer" containerID="95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c" Nov 25 10:02:04 crc kubenswrapper[4760]: E1125 10:02:04.836737 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c\": container with ID starting with 95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c not found: ID does not exist" containerID="95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.836787 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c"} err="failed to get container status \"95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c\": rpc error: code = NotFound desc = could not find container \"95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c\": container with ID starting with 95db037a0cb8e8de1e92e5aa714d91aca4d2ff9177436276c353da0e47cc995c not found: ID does not exist" Nov 25 10:02:04 crc kubenswrapper[4760]: I1125 10:02:04.949871 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100d7182-c48f-4ea9-88af-b66a46ac1109" path="/var/lib/kubelet/pods/100d7182-c48f-4ea9-88af-b66a46ac1109/volumes" Nov 25 10:02:17 crc kubenswrapper[4760]: I1125 10:02:17.766020 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9v5xd"] Nov 25 10:02:17 crc kubenswrapper[4760]: E1125 10:02:17.767118 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerName="extract-utilities" Nov 25 10:02:17 crc 
kubenswrapper[4760]: I1125 10:02:17.767137 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerName="extract-utilities" Nov 25 10:02:17 crc kubenswrapper[4760]: E1125 10:02:17.767154 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerName="extract-content" Nov 25 10:02:17 crc kubenswrapper[4760]: I1125 10:02:17.767161 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerName="extract-content" Nov 25 10:02:17 crc kubenswrapper[4760]: E1125 10:02:17.767178 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerName="registry-server" Nov 25 10:02:17 crc kubenswrapper[4760]: I1125 10:02:17.767186 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerName="registry-server" Nov 25 10:02:17 crc kubenswrapper[4760]: I1125 10:02:17.767477 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="100d7182-c48f-4ea9-88af-b66a46ac1109" containerName="registry-server" Nov 25 10:02:17 crc kubenswrapper[4760]: I1125 10:02:17.769167 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:17 crc kubenswrapper[4760]: I1125 10:02:17.789536 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9v5xd"] Nov 25 10:02:17 crc kubenswrapper[4760]: I1125 10:02:17.903552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bfjj\" (UniqueName: \"kubernetes.io/projected/9135eb5e-038c-4c74-8942-ae626209f23f-kube-api-access-9bfjj\") pod \"redhat-operators-9v5xd\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:17 crc kubenswrapper[4760]: I1125 10:02:17.903895 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-catalog-content\") pod \"redhat-operators-9v5xd\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:17 crc kubenswrapper[4760]: I1125 10:02:17.904100 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-utilities\") pod \"redhat-operators-9v5xd\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.007051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bfjj\" (UniqueName: \"kubernetes.io/projected/9135eb5e-038c-4c74-8942-ae626209f23f-kube-api-access-9bfjj\") pod \"redhat-operators-9v5xd\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.007121 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-catalog-content\") pod \"redhat-operators-9v5xd\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.007223 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-utilities\") pod \"redhat-operators-9v5xd\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.007746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-utilities\") pod \"redhat-operators-9v5xd\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.008300 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-catalog-content\") pod \"redhat-operators-9v5xd\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.038048 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bfjj\" (UniqueName: \"kubernetes.io/projected/9135eb5e-038c-4c74-8942-ae626209f23f-kube-api-access-9bfjj\") pod \"redhat-operators-9v5xd\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.108587 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.563096 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9v5xd"] Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.873723 4760 generic.go:334] "Generic (PLEG): container finished" podID="9135eb5e-038c-4c74-8942-ae626209f23f" containerID="44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3" exitCode=0 Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.873765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v5xd" event={"ID":"9135eb5e-038c-4c74-8942-ae626209f23f","Type":"ContainerDied","Data":"44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3"} Nov 25 10:02:18 crc kubenswrapper[4760]: I1125 10:02:18.873805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v5xd" event={"ID":"9135eb5e-038c-4c74-8942-ae626209f23f","Type":"ContainerStarted","Data":"0a26448a7689c3b374ef187fbfcf9f7f29e96866cf27f68072dd2ed2f6f71212"} Nov 25 10:02:19 crc kubenswrapper[4760]: I1125 10:02:19.886755 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v5xd" event={"ID":"9135eb5e-038c-4c74-8942-ae626209f23f","Type":"ContainerStarted","Data":"13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb"} Nov 25 10:02:22 crc kubenswrapper[4760]: I1125 10:02:22.921076 4760 generic.go:334] "Generic (PLEG): container finished" podID="9135eb5e-038c-4c74-8942-ae626209f23f" containerID="13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb" exitCode=0 Nov 25 10:02:22 crc kubenswrapper[4760]: I1125 10:02:22.921172 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v5xd" 
event={"ID":"9135eb5e-038c-4c74-8942-ae626209f23f","Type":"ContainerDied","Data":"13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb"} Nov 25 10:02:23 crc kubenswrapper[4760]: I1125 10:02:23.933027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v5xd" event={"ID":"9135eb5e-038c-4c74-8942-ae626209f23f","Type":"ContainerStarted","Data":"ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169"} Nov 25 10:02:23 crc kubenswrapper[4760]: I1125 10:02:23.952677 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9v5xd" podStartSLOduration=2.31756672 podStartE2EDuration="6.952647654s" podCreationTimestamp="2025-11-25 10:02:17 +0000 UTC" firstStartedPulling="2025-11-25 10:02:18.875943336 +0000 UTC m=+6672.584974131" lastFinishedPulling="2025-11-25 10:02:23.51102424 +0000 UTC m=+6677.220055065" observedRunningTime="2025-11-25 10:02:23.950085131 +0000 UTC m=+6677.659115926" watchObservedRunningTime="2025-11-25 10:02:23.952647654 +0000 UTC m=+6677.661678479" Nov 25 10:02:28 crc kubenswrapper[4760]: I1125 10:02:28.108803 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:28 crc kubenswrapper[4760]: I1125 10:02:28.109306 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:29 crc kubenswrapper[4760]: I1125 10:02:29.152766 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9v5xd" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" containerName="registry-server" probeResult="failure" output=< Nov 25 10:02:29 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 10:02:29 crc kubenswrapper[4760]: > Nov 25 10:02:31 crc kubenswrapper[4760]: I1125 10:02:31.745807 4760 patch_prober.go:28] interesting 
pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:02:31 crc kubenswrapper[4760]: I1125 10:02:31.746155 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:02:38 crc kubenswrapper[4760]: I1125 10:02:38.154234 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:38 crc kubenswrapper[4760]: I1125 10:02:38.203302 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:38 crc kubenswrapper[4760]: I1125 10:02:38.388613 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9v5xd"] Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.084294 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9v5xd" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" containerName="registry-server" containerID="cri-o://ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169" gracePeriod=2 Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.570062 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.763074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-catalog-content\") pod \"9135eb5e-038c-4c74-8942-ae626209f23f\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.763543 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bfjj\" (UniqueName: \"kubernetes.io/projected/9135eb5e-038c-4c74-8942-ae626209f23f-kube-api-access-9bfjj\") pod \"9135eb5e-038c-4c74-8942-ae626209f23f\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.763821 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-utilities\") pod \"9135eb5e-038c-4c74-8942-ae626209f23f\" (UID: \"9135eb5e-038c-4c74-8942-ae626209f23f\") " Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.764535 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-utilities" (OuterVolumeSpecName: "utilities") pod "9135eb5e-038c-4c74-8942-ae626209f23f" (UID: "9135eb5e-038c-4c74-8942-ae626209f23f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.769757 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9135eb5e-038c-4c74-8942-ae626209f23f-kube-api-access-9bfjj" (OuterVolumeSpecName: "kube-api-access-9bfjj") pod "9135eb5e-038c-4c74-8942-ae626209f23f" (UID: "9135eb5e-038c-4c74-8942-ae626209f23f"). InnerVolumeSpecName "kube-api-access-9bfjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.866102 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.866138 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bfjj\" (UniqueName: \"kubernetes.io/projected/9135eb5e-038c-4c74-8942-ae626209f23f-kube-api-access-9bfjj\") on node \"crc\" DevicePath \"\"" Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.884342 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9135eb5e-038c-4c74-8942-ae626209f23f" (UID: "9135eb5e-038c-4c74-8942-ae626209f23f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:02:40 crc kubenswrapper[4760]: I1125 10:02:40.968511 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9135eb5e-038c-4c74-8942-ae626209f23f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.095455 4760 generic.go:334] "Generic (PLEG): container finished" podID="9135eb5e-038c-4c74-8942-ae626209f23f" containerID="ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169" exitCode=0 Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.095486 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v5xd" event={"ID":"9135eb5e-038c-4c74-8942-ae626209f23f","Type":"ContainerDied","Data":"ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169"} Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.095518 4760 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-9v5xd" event={"ID":"9135eb5e-038c-4c74-8942-ae626209f23f","Type":"ContainerDied","Data":"0a26448a7689c3b374ef187fbfcf9f7f29e96866cf27f68072dd2ed2f6f71212"} Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.095535 4760 scope.go:117] "RemoveContainer" containerID="ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.095604 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9v5xd" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.123914 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9v5xd"] Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.128939 4760 scope.go:117] "RemoveContainer" containerID="13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.135886 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9v5xd"] Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.166449 4760 scope.go:117] "RemoveContainer" containerID="44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.224063 4760 scope.go:117] "RemoveContainer" containerID="ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169" Nov 25 10:02:41 crc kubenswrapper[4760]: E1125 10:02:41.225293 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169\": container with ID starting with ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169 not found: ID does not exist" containerID="ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.225352 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169"} err="failed to get container status \"ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169\": rpc error: code = NotFound desc = could not find container \"ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169\": container with ID starting with ad6f7a04a24c506991be8f1333878c692fce490cb12fcd12d170b1fae402c169 not found: ID does not exist" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.225378 4760 scope.go:117] "RemoveContainer" containerID="13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb" Nov 25 10:02:41 crc kubenswrapper[4760]: E1125 10:02:41.225848 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb\": container with ID starting with 13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb not found: ID does not exist" containerID="13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.225902 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb"} err="failed to get container status \"13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb\": rpc error: code = NotFound desc = could not find container \"13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb\": container with ID starting with 13a95e02fef33dc356146110072db4be4d168f1661b0b2b536f5d46f5a4d58eb not found: ID does not exist" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.225918 4760 scope.go:117] "RemoveContainer" containerID="44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3" Nov 25 10:02:41 crc kubenswrapper[4760]: E1125 
10:02:41.226291 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3\": container with ID starting with 44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3 not found: ID does not exist" containerID="44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3" Nov 25 10:02:41 crc kubenswrapper[4760]: I1125 10:02:41.226346 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3"} err="failed to get container status \"44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3\": rpc error: code = NotFound desc = could not find container \"44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3\": container with ID starting with 44d9cacc2f98db057b4846dfe3711c31ddfe6b9010405188cae580bcf12327e3 not found: ID does not exist" Nov 25 10:02:42 crc kubenswrapper[4760]: I1125 10:02:42.951566 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" path="/var/lib/kubelet/pods/9135eb5e-038c-4c74-8942-ae626209f23f/volumes" Nov 25 10:03:01 crc kubenswrapper[4760]: I1125 10:03:01.746380 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:03:01 crc kubenswrapper[4760]: I1125 10:03:01.746991 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 10:03:01 crc kubenswrapper[4760]: I1125 10:03:01.747074 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 10:03:01 crc kubenswrapper[4760]: I1125 10:03:01.747952 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:03:01 crc kubenswrapper[4760]: I1125 10:03:01.748011 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" gracePeriod=600 Nov 25 10:03:01 crc kubenswrapper[4760]: E1125 10:03:01.882211 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:03:02 crc kubenswrapper[4760]: I1125 10:03:02.313677 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" exitCode=0 Nov 25 10:03:02 crc kubenswrapper[4760]: I1125 10:03:02.313714 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" 
event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c"} Nov 25 10:03:02 crc kubenswrapper[4760]: I1125 10:03:02.314075 4760 scope.go:117] "RemoveContainer" containerID="dcc36475898ac36954c966e05b9522cab13d877d54bb8cd69956cf0ae84bf93b" Nov 25 10:03:02 crc kubenswrapper[4760]: I1125 10:03:02.314846 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:03:02 crc kubenswrapper[4760]: E1125 10:03:02.315179 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:03:14 crc kubenswrapper[4760]: I1125 10:03:14.938241 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:03:14 crc kubenswrapper[4760]: E1125 10:03:14.938939 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:03:15 crc kubenswrapper[4760]: I1125 10:03:15.980987 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 10:03:15 crc kubenswrapper[4760]: I1125 10:03:15.981791 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-plxrr" podUID="cef58941-ae6b-4624-af41-65ab598838eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 10:03:29 crc kubenswrapper[4760]: I1125 10:03:29.938057 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:03:29 crc kubenswrapper[4760]: E1125 10:03:29.938825 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:03:40 crc kubenswrapper[4760]: I1125 10:03:40.943237 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:03:40 crc kubenswrapper[4760]: E1125 10:03:40.944074 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:03:51 crc kubenswrapper[4760]: I1125 10:03:51.782518 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" containerID="cfc872377fce4f80091a2612f5d26d775e7f4c542e2ec9f60922ca1376d4b315" exitCode=0 Nov 25 10:03:51 crc kubenswrapper[4760]: I1125 10:03:51.782640 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e","Type":"ContainerDied","Data":"cfc872377fce4f80091a2612f5d26d775e7f4c542e2ec9f60922ca1376d4b315"} Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.271365 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.335550 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-workdir\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.335670 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-temporary\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.335783 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-config-data\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.335815 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.335891 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz9zk\" (UniqueName: \"kubernetes.io/projected/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-kube-api-access-pz9zk\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.335914 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config-secret\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.335988 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.336032 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ceph\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.336065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ca-certs\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.336112 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ssh-key\") pod \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\" (UID: \"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e\") " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.337028 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-config-data" (OuterVolumeSpecName: "config-data") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.338140 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.342942 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.344152 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ceph" (OuterVolumeSpecName: "ceph") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.344409 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.346385 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-kube-api-access-pz9zk" (OuterVolumeSpecName: "kube-api-access-pz9zk") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "kube-api-access-pz9zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.372434 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.372456 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.373021 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.389499 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" (UID: "7e76e3b1-69e6-4498-b2f9-a52fdfe1650e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439458 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439499 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439513 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439524 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz9zk\" (UniqueName: 
\"kubernetes.io/projected/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-kube-api-access-pz9zk\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439533 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439566 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439575 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439585 4760 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439595 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.439603 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7e76e3b1-69e6-4498-b2f9-a52fdfe1650e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.458696 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 
10:03:53.542208 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.804388 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"7e76e3b1-69e6-4498-b2f9-a52fdfe1650e","Type":"ContainerDied","Data":"4b073015ef12eda9149e9b8e4bdeda747adb7aaf0289de0eea25dd85de7ef5a2"} Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.804459 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b073015ef12eda9149e9b8e4bdeda747adb7aaf0289de0eea25dd85de7ef5a2" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.805053 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 10:03:53 crc kubenswrapper[4760]: I1125 10:03:53.939701 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:03:53 crc kubenswrapper[4760]: E1125 10:03:53.940133 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.880349 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 10:04:03 crc kubenswrapper[4760]: E1125 10:04:03.881434 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" 
containerName="tempest-tests-tempest-tests-runner" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.881452 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" containerName="tempest-tests-tempest-tests-runner" Nov 25 10:04:03 crc kubenswrapper[4760]: E1125 10:04:03.881488 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" containerName="extract-utilities" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.881505 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" containerName="extract-utilities" Nov 25 10:04:03 crc kubenswrapper[4760]: E1125 10:04:03.881534 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" containerName="extract-content" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.881542 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" containerName="extract-content" Nov 25 10:04:03 crc kubenswrapper[4760]: E1125 10:04:03.881568 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" containerName="registry-server" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.881575 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" containerName="registry-server" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.881804 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9135eb5e-038c-4c74-8942-ae626209f23f" containerName="registry-server" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.881842 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e76e3b1-69e6-4498-b2f9-a52fdfe1650e" containerName="tempest-tests-tempest-tests-runner" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.882656 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.885237 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gq598" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.894086 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.966485 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr9gc\" (UniqueName: \"kubernetes.io/projected/9d79e9ee-084d-41e7-9513-aaea8863e85d-kube-api-access-cr9gc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9d79e9ee-084d-41e7-9513-aaea8863e85d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 10:04:03 crc kubenswrapper[4760]: I1125 10:04:03.966647 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9d79e9ee-084d-41e7-9513-aaea8863e85d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 10:04:04 crc kubenswrapper[4760]: I1125 10:04:04.067955 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr9gc\" (UniqueName: \"kubernetes.io/projected/9d79e9ee-084d-41e7-9513-aaea8863e85d-kube-api-access-cr9gc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9d79e9ee-084d-41e7-9513-aaea8863e85d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 10:04:04 crc kubenswrapper[4760]: I1125 10:04:04.068301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9d79e9ee-084d-41e7-9513-aaea8863e85d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 10:04:04 crc kubenswrapper[4760]: I1125 10:04:04.068895 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9d79e9ee-084d-41e7-9513-aaea8863e85d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 10:04:04 crc kubenswrapper[4760]: I1125 10:04:04.088010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr9gc\" (UniqueName: \"kubernetes.io/projected/9d79e9ee-084d-41e7-9513-aaea8863e85d-kube-api-access-cr9gc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9d79e9ee-084d-41e7-9513-aaea8863e85d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 10:04:04 crc kubenswrapper[4760]: I1125 10:04:04.102650 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9d79e9ee-084d-41e7-9513-aaea8863e85d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 10:04:04 crc kubenswrapper[4760]: I1125 10:04:04.207818 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 10:04:04 crc kubenswrapper[4760]: W1125 10:04:04.664154 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d79e9ee_084d_41e7_9513_aaea8863e85d.slice/crio-15baa227eec55a62ffec9ccbdfa94aaf7c2fdca27d3e610c1895343fc614baf7 WatchSource:0}: Error finding container 15baa227eec55a62ffec9ccbdfa94aaf7c2fdca27d3e610c1895343fc614baf7: Status 404 returned error can't find the container with id 15baa227eec55a62ffec9ccbdfa94aaf7c2fdca27d3e610c1895343fc614baf7 Nov 25 10:04:04 crc kubenswrapper[4760]: I1125 10:04:04.664713 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 10:04:04 crc kubenswrapper[4760]: I1125 10:04:04.925846 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9d79e9ee-084d-41e7-9513-aaea8863e85d","Type":"ContainerStarted","Data":"15baa227eec55a62ffec9ccbdfa94aaf7c2fdca27d3e610c1895343fc614baf7"} Nov 25 10:04:05 crc kubenswrapper[4760]: I1125 10:04:05.938122 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9d79e9ee-084d-41e7-9513-aaea8863e85d","Type":"ContainerStarted","Data":"42d628862326f440df1671a914a34e386be886b2d7246bb1af4445ca40558aa7"} Nov 25 10:04:05 crc kubenswrapper[4760]: I1125 10:04:05.959679 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.147971154 podStartE2EDuration="2.959649586s" podCreationTimestamp="2025-11-25 10:04:03 +0000 UTC" firstStartedPulling="2025-11-25 10:04:04.668548751 +0000 UTC m=+6778.377579547" lastFinishedPulling="2025-11-25 10:04:05.480227184 +0000 UTC m=+6779.189257979" 
observedRunningTime="2025-11-25 10:04:05.954600112 +0000 UTC m=+6779.663630937" watchObservedRunningTime="2025-11-25 10:04:05.959649586 +0000 UTC m=+6779.668680411" Nov 25 10:04:07 crc kubenswrapper[4760]: I1125 10:04:07.939148 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:04:07 crc kubenswrapper[4760]: E1125 10:04:07.939784 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:04:22 crc kubenswrapper[4760]: I1125 10:04:22.938608 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:04:22 crc kubenswrapper[4760]: E1125 10:04:22.939399 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.843379 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.845217 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.847982 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.848000 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"tobiko-secret" Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.848899 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-public-key" Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.849103 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-private-key" Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.849515 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-config" Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.860942 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.918106 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.918179 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: 
\"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:25 crc kubenswrapper[4760]: I1125 10:04:25.918478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.021150 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.021238 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv98q\" (UniqueName: \"kubernetes.io/projected/5a899175-c606-4361-8300-3c2ed82d823c-kube-api-access-tv98q\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.021423 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.021578 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.021628 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.021735 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.021935 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.022008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.022923 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.023005 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.023061 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.023113 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.023171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.023435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.030884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.124701 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.124783 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc 
kubenswrapper[4760]: I1125 10:04:26.124877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.124910 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv98q\" (UniqueName: \"kubernetes.io/projected/5a899175-c606-4361-8300-3c2ed82d823c-kube-api-access-tv98q\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.124962 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.125016 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.125080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.125121 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.125157 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.125364 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.125814 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.126239 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.126675 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.126703 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.132609 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.132903 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.132726 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.141742 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv98q\" (UniqueName: \"kubernetes.io/projected/5a899175-c606-4361-8300-3c2ed82d823c-kube-api-access-tv98q\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.158966 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.177744 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:04:26 crc kubenswrapper[4760]: I1125 10:04:26.703887 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Nov 25 10:04:26 crc kubenswrapper[4760]: W1125 10:04:26.716565 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a899175_c606_4361_8300_3c2ed82d823c.slice/crio-75a871907593b7e6be8d50a0bad4c275b3519c380c31240923bae0aff37de947 WatchSource:0}: Error finding container 75a871907593b7e6be8d50a0bad4c275b3519c380c31240923bae0aff37de947: Status 404 returned error can't find the container with id 75a871907593b7e6be8d50a0bad4c275b3519c380c31240923bae0aff37de947 Nov 25 10:04:27 crc kubenswrapper[4760]: I1125 10:04:27.370229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"5a899175-c606-4361-8300-3c2ed82d823c","Type":"ContainerStarted","Data":"75a871907593b7e6be8d50a0bad4c275b3519c380c31240923bae0aff37de947"} Nov 25 10:04:34 crc kubenswrapper[4760]: I1125 10:04:34.939595 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:04:34 crc kubenswrapper[4760]: E1125 10:04:34.940508 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:04:42 crc kubenswrapper[4760]: I1125 10:04:42.533628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" 
event={"ID":"5a899175-c606-4361-8300-3c2ed82d823c","Type":"ContainerStarted","Data":"1f4903d76081fd9ac336dfe399abece2f525ca69fbeee7b0c6c1bdf56854d961"} Nov 25 10:04:42 crc kubenswrapper[4760]: I1125 10:04:42.556098 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podStartSLOduration=3.686300169 podStartE2EDuration="18.556078854s" podCreationTimestamp="2025-11-25 10:04:24 +0000 UTC" firstStartedPulling="2025-11-25 10:04:26.718916506 +0000 UTC m=+6800.427947301" lastFinishedPulling="2025-11-25 10:04:41.588695191 +0000 UTC m=+6815.297725986" observedRunningTime="2025-11-25 10:04:42.550371931 +0000 UTC m=+6816.259402726" watchObservedRunningTime="2025-11-25 10:04:42.556078854 +0000 UTC m=+6816.265109649" Nov 25 10:04:46 crc kubenswrapper[4760]: I1125 10:04:46.945582 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:04:46 crc kubenswrapper[4760]: E1125 10:04:46.946457 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:05:00 crc kubenswrapper[4760]: I1125 10:05:00.942281 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:05:00 crc kubenswrapper[4760]: E1125 10:05:00.943140 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:05:05 crc kubenswrapper[4760]: I1125 10:05:05.921052 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sdk6f"] Nov 25 10:05:05 crc kubenswrapper[4760]: I1125 10:05:05.924490 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:05 crc kubenswrapper[4760]: I1125 10:05:05.938424 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdk6f"] Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.108415 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-catalog-content\") pod \"community-operators-sdk6f\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.108509 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqgf\" (UniqueName: \"kubernetes.io/projected/5d435dd3-5b89-4573-9845-d661f8130300-kube-api-access-rzqgf\") pod \"community-operators-sdk6f\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.108767 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-utilities\") pod \"community-operators-sdk6f\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " 
pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.210829 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-utilities\") pod \"community-operators-sdk6f\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.211231 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-catalog-content\") pod \"community-operators-sdk6f\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.211367 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqgf\" (UniqueName: \"kubernetes.io/projected/5d435dd3-5b89-4573-9845-d661f8130300-kube-api-access-rzqgf\") pod \"community-operators-sdk6f\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.211364 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-utilities\") pod \"community-operators-sdk6f\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.211703 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-catalog-content\") pod \"community-operators-sdk6f\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " 
pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.232169 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqgf\" (UniqueName: \"kubernetes.io/projected/5d435dd3-5b89-4573-9845-d661f8130300-kube-api-access-rzqgf\") pod \"community-operators-sdk6f\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:06 crc kubenswrapper[4760]: I1125 10:05:06.247372 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:09 crc kubenswrapper[4760]: I1125 10:05:09.930953 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdk6f"] Nov 25 10:05:09 crc kubenswrapper[4760]: W1125 10:05:09.933203 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d435dd3_5b89_4573_9845_d661f8130300.slice/crio-ff41fee4074dfb19738230b460e98846f9a91313812e6459797d0c98c92f1936 WatchSource:0}: Error finding container ff41fee4074dfb19738230b460e98846f9a91313812e6459797d0c98c92f1936: Status 404 returned error can't find the container with id ff41fee4074dfb19738230b460e98846f9a91313812e6459797d0c98c92f1936 Nov 25 10:05:10 crc kubenswrapper[4760]: I1125 10:05:10.818653 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d435dd3-5b89-4573-9845-d661f8130300" containerID="5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673" exitCode=0 Nov 25 10:05:10 crc kubenswrapper[4760]: I1125 10:05:10.818718 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdk6f" event={"ID":"5d435dd3-5b89-4573-9845-d661f8130300","Type":"ContainerDied","Data":"5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673"} Nov 25 10:05:10 crc kubenswrapper[4760]: I1125 10:05:10.818997 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdk6f" event={"ID":"5d435dd3-5b89-4573-9845-d661f8130300","Type":"ContainerStarted","Data":"ff41fee4074dfb19738230b460e98846f9a91313812e6459797d0c98c92f1936"} Nov 25 10:05:11 crc kubenswrapper[4760]: I1125 10:05:11.829347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdk6f" event={"ID":"5d435dd3-5b89-4573-9845-d661f8130300","Type":"ContainerStarted","Data":"c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1"} Nov 25 10:05:11 crc kubenswrapper[4760]: I1125 10:05:11.937962 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:05:11 crc kubenswrapper[4760]: E1125 10:05:11.941852 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:05:12 crc kubenswrapper[4760]: I1125 10:05:12.839369 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d435dd3-5b89-4573-9845-d661f8130300" containerID="c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1" exitCode=0 Nov 25 10:05:12 crc kubenswrapper[4760]: I1125 10:05:12.839473 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdk6f" event={"ID":"5d435dd3-5b89-4573-9845-d661f8130300","Type":"ContainerDied","Data":"c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1"} Nov 25 10:05:13 crc kubenswrapper[4760]: I1125 10:05:13.853097 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-sdk6f" event={"ID":"5d435dd3-5b89-4573-9845-d661f8130300","Type":"ContainerStarted","Data":"49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f"} Nov 25 10:05:13 crc kubenswrapper[4760]: I1125 10:05:13.880902 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sdk6f" podStartSLOduration=6.226216643 podStartE2EDuration="8.880872009s" podCreationTimestamp="2025-11-25 10:05:05 +0000 UTC" firstStartedPulling="2025-11-25 10:05:10.820472455 +0000 UTC m=+6844.529503260" lastFinishedPulling="2025-11-25 10:05:13.475127831 +0000 UTC m=+6847.184158626" observedRunningTime="2025-11-25 10:05:13.870608566 +0000 UTC m=+6847.579639381" watchObservedRunningTime="2025-11-25 10:05:13.880872009 +0000 UTC m=+6847.589902834" Nov 25 10:05:16 crc kubenswrapper[4760]: I1125 10:05:16.247605 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:16 crc kubenswrapper[4760]: I1125 10:05:16.249179 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:16 crc kubenswrapper[4760]: I1125 10:05:16.301161 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:25 crc kubenswrapper[4760]: I1125 10:05:25.939094 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:05:25 crc kubenswrapper[4760]: E1125 10:05:25.948639 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:05:26 crc kubenswrapper[4760]: I1125 10:05:26.293127 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:26 crc kubenswrapper[4760]: I1125 10:05:26.341827 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdk6f"] Nov 25 10:05:26 crc kubenswrapper[4760]: I1125 10:05:26.977641 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sdk6f" podUID="5d435dd3-5b89-4573-9845-d661f8130300" containerName="registry-server" containerID="cri-o://49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f" gracePeriod=2 Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.451342 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.595688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzqgf\" (UniqueName: \"kubernetes.io/projected/5d435dd3-5b89-4573-9845-d661f8130300-kube-api-access-rzqgf\") pod \"5d435dd3-5b89-4573-9845-d661f8130300\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.595801 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-utilities\") pod \"5d435dd3-5b89-4573-9845-d661f8130300\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.595983 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-catalog-content\") pod \"5d435dd3-5b89-4573-9845-d661f8130300\" (UID: \"5d435dd3-5b89-4573-9845-d661f8130300\") " Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.596833 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-utilities" (OuterVolumeSpecName: "utilities") pod "5d435dd3-5b89-4573-9845-d661f8130300" (UID: "5d435dd3-5b89-4573-9845-d661f8130300"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.602356 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d435dd3-5b89-4573-9845-d661f8130300-kube-api-access-rzqgf" (OuterVolumeSpecName: "kube-api-access-rzqgf") pod "5d435dd3-5b89-4573-9845-d661f8130300" (UID: "5d435dd3-5b89-4573-9845-d661f8130300"). InnerVolumeSpecName "kube-api-access-rzqgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.641270 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d435dd3-5b89-4573-9845-d661f8130300" (UID: "5d435dd3-5b89-4573-9845-d661f8130300"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.698411 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzqgf\" (UniqueName: \"kubernetes.io/projected/5d435dd3-5b89-4573-9845-d661f8130300-kube-api-access-rzqgf\") on node \"crc\" DevicePath \"\"" Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.698442 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.698453 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d435dd3-5b89-4573-9845-d661f8130300-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.987839 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d435dd3-5b89-4573-9845-d661f8130300" containerID="49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f" exitCode=0 Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.987909 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdk6f" Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.987919 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdk6f" event={"ID":"5d435dd3-5b89-4573-9845-d661f8130300","Type":"ContainerDied","Data":"49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f"} Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.988329 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdk6f" event={"ID":"5d435dd3-5b89-4573-9845-d661f8130300","Type":"ContainerDied","Data":"ff41fee4074dfb19738230b460e98846f9a91313812e6459797d0c98c92f1936"} Nov 25 10:05:27 crc kubenswrapper[4760]: I1125 10:05:27.988355 4760 scope.go:117] "RemoveContainer" containerID="49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f" Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.007455 4760 scope.go:117] "RemoveContainer" containerID="c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1" Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.030925 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdk6f"] Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.032950 4760 scope.go:117] "RemoveContainer" containerID="5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673" Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.040737 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sdk6f"] Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.073927 4760 scope.go:117] "RemoveContainer" containerID="49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f" Nov 25 10:05:28 crc kubenswrapper[4760]: E1125 10:05:28.074314 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f\": container with ID starting with 49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f not found: ID does not exist" containerID="49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f" Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.074347 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f"} err="failed to get container status \"49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f\": rpc error: code = NotFound desc = could not find container \"49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f\": container with ID starting with 49bfb4e47ede0a42a3a18edb75f2e074ed07e4adafe951180344da25d9ec294f not found: ID does not exist" Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.074371 4760 scope.go:117] "RemoveContainer" containerID="c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1" Nov 25 10:05:28 crc kubenswrapper[4760]: E1125 10:05:28.074805 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1\": container with ID starting with c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1 not found: ID does not exist" containerID="c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1" Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.074861 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1"} err="failed to get container status \"c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1\": rpc error: code = NotFound desc = could not find container \"c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1\": container with ID 
starting with c2cd3a17cc59fc827473e212c1f19106c71ceae5f3fa0c02b996dab101fae4d1 not found: ID does not exist" Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.074897 4760 scope.go:117] "RemoveContainer" containerID="5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673" Nov 25 10:05:28 crc kubenswrapper[4760]: E1125 10:05:28.075212 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673\": container with ID starting with 5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673 not found: ID does not exist" containerID="5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673" Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.075241 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673"} err="failed to get container status \"5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673\": rpc error: code = NotFound desc = could not find container \"5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673\": container with ID starting with 5b8d02d728ccfb26541b720c1242953b857ebea86a84f53437511d5e230c2673 not found: ID does not exist" Nov 25 10:05:28 crc kubenswrapper[4760]: I1125 10:05:28.949803 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d435dd3-5b89-4573-9845-d661f8130300" path="/var/lib/kubelet/pods/5d435dd3-5b89-4573-9845-d661f8130300/volumes" Nov 25 10:05:36 crc kubenswrapper[4760]: I1125 10:05:36.945912 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:05:36 crc kubenswrapper[4760]: E1125 10:05:36.948603 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:05:50 crc kubenswrapper[4760]: I1125 10:05:50.939330 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:05:50 crc kubenswrapper[4760]: E1125 10:05:50.940230 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:06:03 crc kubenswrapper[4760]: I1125 10:06:03.939070 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:06:03 crc kubenswrapper[4760]: E1125 10:06:03.939931 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:06:17 crc kubenswrapper[4760]: I1125 10:06:17.939591 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:06:17 crc kubenswrapper[4760]: E1125 10:06:17.940375 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:06:27 crc kubenswrapper[4760]: I1125 10:06:27.516868 4760 generic.go:334] "Generic (PLEG): container finished" podID="5a899175-c606-4361-8300-3c2ed82d823c" containerID="1f4903d76081fd9ac336dfe399abece2f525ca69fbeee7b0c6c1bdf56854d961" exitCode=0 Nov 25 10:06:27 crc kubenswrapper[4760]: I1125 10:06:27.516938 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"5a899175-c606-4361-8300-3c2ed82d823c","Type":"ContainerDied","Data":"1f4903d76081fd9ac336dfe399abece2f525ca69fbeee7b0c6c1bdf56854d961"} Nov 25 10:06:28 crc kubenswrapper[4760]: I1125 10:06:28.946538 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.025612 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Nov 25 10:06:29 crc kubenswrapper[4760]: E1125 10:06:29.026136 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d435dd3-5b89-4573-9845-d661f8130300" containerName="registry-server" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.026152 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d435dd3-5b89-4573-9845-d661f8130300" containerName="registry-server" Nov 25 10:06:29 crc kubenswrapper[4760]: E1125 10:06:29.026176 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d435dd3-5b89-4573-9845-d661f8130300" containerName="extract-utilities" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.026183 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5d435dd3-5b89-4573-9845-d661f8130300" containerName="extract-utilities" Nov 25 10:06:29 crc kubenswrapper[4760]: E1125 10:06:29.026220 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a899175-c606-4361-8300-3c2ed82d823c" containerName="tobiko-tests-tobiko" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.026227 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a899175-c606-4361-8300-3c2ed82d823c" containerName="tobiko-tests-tobiko" Nov 25 10:06:29 crc kubenswrapper[4760]: E1125 10:06:29.026235 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d435dd3-5b89-4573-9845-d661f8130300" containerName="extract-content" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.026242 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d435dd3-5b89-4573-9845-d661f8130300" containerName="extract-content" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.026474 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d435dd3-5b89-4573-9845-d661f8130300" containerName="registry-server" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.026505 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a899175-c606-4361-8300-3c2ed82d823c" containerName="tobiko-tests-tobiko" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.027220 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.036802 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.117646 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-clouds-config\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.117740 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-public-key\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.117788 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-private-key\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.117829 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-temporary\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.117882 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-openstack-config-secret\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.117910 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-workdir\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.117942 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ca-certs\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.117964 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-kubeconfig\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118010 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-config\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118026 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ceph\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118126 
4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv98q\" (UniqueName: \"kubernetes.io/projected/5a899175-c606-4361-8300-3c2ed82d823c-kube-api-access-tv98q\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118178 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"5a899175-c606-4361-8300-3c2ed82d823c\" (UID: \"5a899175-c606-4361-8300-3c2ed82d823c\") " Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118535 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118580 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118611 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118645 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118779 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gm5g\" (UniqueName: \"kubernetes.io/projected/8c968840-fcc2-4c11-baed-7477dfe970d2-kube-api-access-5gm5g\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.118975 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.119012 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.119120 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 
10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.119217 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.119311 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.119351 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.119718 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.126411 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a899175-c606-4361-8300-3c2ed82d823c-kube-api-access-tv98q" (OuterVolumeSpecName: "kube-api-access-tv98q") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "kube-api-access-tv98q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.127689 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ceph" (OuterVolumeSpecName: "ceph") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.143274 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.151364 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "tobiko-public-key". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.155947 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.159421 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.160557 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.164561 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "tobiko-private-key". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.180291 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.184230 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221148 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221224 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221313 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: 
\"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221385 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221417 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221498 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221520 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221540 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221573 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gm5g\" (UniqueName: \"kubernetes.io/projected/8c968840-fcc2-4c11-baed-7477dfe970d2-kube-api-access-5gm5g\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221694 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" 
(UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221705 4760 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221714 4760 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221722 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221734 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221743 4760 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221751 4760 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-kubeconfig\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221760 4760 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: 
\"kubernetes.io/configmap/5a899175-c606-4361-8300-3c2ed82d823c-tobiko-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221769 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a899175-c606-4361-8300-3c2ed82d823c-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.221778 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv98q\" (UniqueName: \"kubernetes.io/projected/5a899175-c606-4361-8300-3c2ed82d823c-kube-api-access-tv98q\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.222965 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.223326 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.223552 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.224004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.224832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.226689 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.231697 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.233450 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.236815 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.237752 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.249575 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gm5g\" (UniqueName: \"kubernetes.io/projected/8c968840-fcc2-4c11-baed-7477dfe970d2-kube-api-access-5gm5g\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.272438 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.354480 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.548725 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"5a899175-c606-4361-8300-3c2ed82d823c","Type":"ContainerDied","Data":"75a871907593b7e6be8d50a0bad4c275b3519c380c31240923bae0aff37de947"} Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.549062 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a871907593b7e6be8d50a0bad4c275b3519c380c31240923bae0aff37de947" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.549131 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 10:06:29 crc kubenswrapper[4760]: I1125 10:06:29.880093 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Nov 25 10:06:30 crc kubenswrapper[4760]: I1125 10:06:30.262371 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5a899175-c606-4361-8300-3c2ed82d823c" (UID: "5a899175-c606-4361-8300-3c2ed82d823c"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:06:30 crc kubenswrapper[4760]: I1125 10:06:30.354068 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5a899175-c606-4361-8300-3c2ed82d823c-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 10:06:30 crc kubenswrapper[4760]: I1125 10:06:30.557693 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"8c968840-fcc2-4c11-baed-7477dfe970d2","Type":"ContainerStarted","Data":"a3ed5fef72d5258d232e56f542c9190d9f340f980cb52a090a7c23d1a82a3d74"} Nov 25 10:06:30 crc kubenswrapper[4760]: I1125 10:06:30.938564 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:06:30 crc kubenswrapper[4760]: E1125 10:06:30.938991 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:06:31 crc kubenswrapper[4760]: I1125 10:06:31.568678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"8c968840-fcc2-4c11-baed-7477dfe970d2","Type":"ContainerStarted","Data":"3ca03f0f3871127c468f5203dd3821e04334f79c07a43d0f38d75b0ab1c1aa05"} Nov 25 10:06:31 crc kubenswrapper[4760]: I1125 10:06:31.590399 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s01-sanity" podStartSLOduration=3.590378817 podStartE2EDuration="3.590378817s" podCreationTimestamp="2025-11-25 10:06:28 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:06:31.586124595 +0000 UTC m=+6925.295155390" watchObservedRunningTime="2025-11-25 10:06:31.590378817 +0000 UTC m=+6925.299409612" Nov 25 10:06:43 crc kubenswrapper[4760]: I1125 10:06:43.938530 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:06:43 crc kubenswrapper[4760]: E1125 10:06:43.939220 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:06:58 crc kubenswrapper[4760]: I1125 10:06:58.939243 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:06:58 crc kubenswrapper[4760]: E1125 10:06:58.940595 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:07:09 crc kubenswrapper[4760]: I1125 10:07:09.939197 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:07:09 crc kubenswrapper[4760]: E1125 10:07:09.940242 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:07:20 crc kubenswrapper[4760]: I1125 10:07:20.939381 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:07:20 crc kubenswrapper[4760]: E1125 10:07:20.940193 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:07:31 crc kubenswrapper[4760]: I1125 10:07:31.938903 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:07:31 crc kubenswrapper[4760]: E1125 10:07:31.939682 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:07:46 crc kubenswrapper[4760]: I1125 10:07:46.946857 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:07:46 crc kubenswrapper[4760]: E1125 10:07:46.947994 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:07:58 crc kubenswrapper[4760]: I1125 10:07:58.940873 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:07:58 crc kubenswrapper[4760]: E1125 10:07:58.941519 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:08:10 crc kubenswrapper[4760]: I1125 10:08:10.939078 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:08:11 crc kubenswrapper[4760]: I1125 10:08:11.555566 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"97bc83681ec651ceba1b9d3f3554238617eb62f9df1e07f0a88e8966aea91621"} Nov 25 10:09:03 crc kubenswrapper[4760]: I1125 10:09:03.063869 4760 generic.go:334] "Generic (PLEG): container finished" podID="8c968840-fcc2-4c11-baed-7477dfe970d2" containerID="3ca03f0f3871127c468f5203dd3821e04334f79c07a43d0f38d75b0ab1c1aa05" exitCode=0 Nov 25 10:09:03 crc kubenswrapper[4760]: I1125 10:09:03.064012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" 
event={"ID":"8c968840-fcc2-4c11-baed-7477dfe970d2","Type":"ContainerDied","Data":"3ca03f0f3871127c468f5203dd3821e04334f79c07a43d0f38d75b0ab1c1aa05"} Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.535977 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.668877 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gm5g\" (UniqueName: \"kubernetes.io/projected/8c968840-fcc2-4c11-baed-7477dfe970d2-kube-api-access-5gm5g\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-openstack-config-secret\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-private-key\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669294 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-workdir\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669363 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" 
(UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-config\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669445 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ceph\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669548 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-temporary\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669606 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669651 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-clouds-config\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669752 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ca-certs\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: 
I1125 10:09:04.669785 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-public-key\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.669817 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-kubeconfig\") pod \"8c968840-fcc2-4c11-baed-7477dfe970d2\" (UID: \"8c968840-fcc2-4c11-baed-7477dfe970d2\") " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.670918 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.676160 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c968840-fcc2-4c11-baed-7477dfe970d2-kube-api-access-5gm5g" (OuterVolumeSpecName: "kube-api-access-5gm5g") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "kube-api-access-5gm5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.685171 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.695332 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ceph" (OuterVolumeSpecName: "ceph") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.706199 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.706769 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.717864 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "kubeconfig". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.722635 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.731412 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.738890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.750983 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773076 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773118 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773132 4760 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773143 4760 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773153 4760 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-kubeconfig\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773170 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gm5g\" (UniqueName: \"kubernetes.io/projected/8c968840-fcc2-4c11-baed-7477dfe970d2-kube-api-access-5gm5g\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773179 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 
10:09:04.773187 4760 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773195 4760 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/8c968840-fcc2-4c11-baed-7477dfe970d2-tobiko-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773203 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c968840-fcc2-4c11-baed-7477dfe970d2-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.773213 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.796699 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 25 10:09:04 crc kubenswrapper[4760]: I1125 10:09:04.874838 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:05 crc kubenswrapper[4760]: I1125 10:09:05.086238 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"8c968840-fcc2-4c11-baed-7477dfe970d2","Type":"ContainerDied","Data":"a3ed5fef72d5258d232e56f542c9190d9f340f980cb52a090a7c23d1a82a3d74"} Nov 25 10:09:05 crc kubenswrapper[4760]: I1125 10:09:05.086292 4760 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a3ed5fef72d5258d232e56f542c9190d9f340f980cb52a090a7c23d1a82a3d74" Nov 25 10:09:05 crc kubenswrapper[4760]: I1125 10:09:05.086347 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 10:09:05 crc kubenswrapper[4760]: I1125 10:09:05.899580 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "8c968840-fcc2-4c11-baed-7477dfe970d2" (UID: "8c968840-fcc2-4c11-baed-7477dfe970d2"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:09:05 crc kubenswrapper[4760]: I1125 10:09:05.912411 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c968840-fcc2-4c11-baed-7477dfe970d2-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.020607 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Nov 25 10:09:11 crc kubenswrapper[4760]: E1125 10:09:11.021743 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c968840-fcc2-4c11-baed-7477dfe970d2" containerName="tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.021758 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c968840-fcc2-4c11-baed-7477dfe970d2" containerName="tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.021999 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c968840-fcc2-4c11-baed-7477dfe970d2" containerName="tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.022840 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.046512 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.126439 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kkr8\" (UniqueName: \"kubernetes.io/projected/c1a8f236-1676-4e0e-9395-8500fda5eba2-kube-api-access-8kkr8\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c1a8f236-1676-4e0e-9395-8500fda5eba2\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.126563 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c1a8f236-1676-4e0e-9395-8500fda5eba2\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.228928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kkr8\" (UniqueName: \"kubernetes.io/projected/c1a8f236-1676-4e0e-9395-8500fda5eba2-kube-api-access-8kkr8\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c1a8f236-1676-4e0e-9395-8500fda5eba2\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.229064 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c1a8f236-1676-4e0e-9395-8500fda5eba2\") " 
pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.229845 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c1a8f236-1676-4e0e-9395-8500fda5eba2\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.261674 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kkr8\" (UniqueName: \"kubernetes.io/projected/c1a8f236-1676-4e0e-9395-8500fda5eba2-kube-api-access-8kkr8\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c1a8f236-1676-4e0e-9395-8500fda5eba2\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.289271 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c1a8f236-1676-4e0e-9395-8500fda5eba2\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.350594 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.614463 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Nov 25 10:09:11 crc kubenswrapper[4760]: I1125 10:09:11.621187 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 10:09:12 crc kubenswrapper[4760]: I1125 10:09:12.145092 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"c1a8f236-1676-4e0e-9395-8500fda5eba2","Type":"ContainerStarted","Data":"7bf652124e8b7216ba8708acce6f0b852f7960874d3fbad45c81dd354ef4f8ba"} Nov 25 10:09:13 crc kubenswrapper[4760]: I1125 10:09:13.155732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"c1a8f236-1676-4e0e-9395-8500fda5eba2","Type":"ContainerStarted","Data":"e316e00e8aacb1550908083266d8bb9422a77ac8c3a718a1402688fb3f1d4622"} Nov 25 10:09:13 crc kubenswrapper[4760]: I1125 10:09:13.184116 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" podStartSLOduration=2.354821405 podStartE2EDuration="3.18408158s" podCreationTimestamp="2025-11-25 10:09:10 +0000 UTC" firstStartedPulling="2025-11-25 10:09:11.620961377 +0000 UTC m=+7085.329992172" lastFinishedPulling="2025-11-25 10:09:12.450221552 +0000 UTC m=+7086.159252347" observedRunningTime="2025-11-25 10:09:13.171278634 +0000 UTC m=+7086.880309449" watchObservedRunningTime="2025-11-25 10:09:13.18408158 +0000 UTC m=+7086.893112395" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.545794 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ansibletest-ansibletest"] Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.547850 4760 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.550673 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.550872 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.561878 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.578561 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ceph\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.578648 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.578679 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.578699 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6prdm\" (UniqueName: 
\"kubernetes.io/projected/5fd9b990-91a9-4529-a951-15647544f5ec-kube-api-access-6prdm\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.578718 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.578957 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.579024 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.579124 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.579227 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.579404 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.680754 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.680806 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.680847 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.680888 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.680977 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.681016 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ceph\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.681051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.681080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.681106 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.681126 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6prdm\" (UniqueName: \"kubernetes.io/projected/5fd9b990-91a9-4529-a951-15647544f5ec-kube-api-access-6prdm\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.681284 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.681967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.682301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.682367 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.690131 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.690381 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.690452 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.694964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.696083 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ceph\") pod \"ansibletest-ansibletest\" (UID: 
\"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.698419 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6prdm\" (UniqueName: \"kubernetes.io/projected/5fd9b990-91a9-4529-a951-15647544f5ec-kube-api-access-6prdm\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.717505 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ansibletest-ansibletest\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " pod="openstack/ansibletest-ansibletest" Nov 25 10:09:27 crc kubenswrapper[4760]: I1125 10:09:27.876588 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Nov 25 10:09:28 crc kubenswrapper[4760]: I1125 10:09:28.324925 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Nov 25 10:09:28 crc kubenswrapper[4760]: W1125 10:09:28.328059 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b990_91a9_4529_a951_15647544f5ec.slice/crio-5ee9b282f90c18d0ecf94a261ecaaabaee004a49d89b3f15f18caa783af848d6 WatchSource:0}: Error finding container 5ee9b282f90c18d0ecf94a261ecaaabaee004a49d89b3f15f18caa783af848d6: Status 404 returned error can't find the container with id 5ee9b282f90c18d0ecf94a261ecaaabaee004a49d89b3f15f18caa783af848d6 Nov 25 10:09:29 crc kubenswrapper[4760]: I1125 10:09:29.319747 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" 
event={"ID":"5fd9b990-91a9-4529-a951-15647544f5ec","Type":"ContainerStarted","Data":"5ee9b282f90c18d0ecf94a261ecaaabaee004a49d89b3f15f18caa783af848d6"} Nov 25 10:09:44 crc kubenswrapper[4760]: E1125 10:09:44.737809 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified" Nov 25 10:09:44 crc kubenswrapper[4760]: E1125 10:09:44.738579 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 25 10:09:44 crc kubenswrapper[4760]: container &Container{Name:ansibletest-ansibletest,Image:quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_ANSIBLE_EXTRA_VARS,Value:-e manual_run=false,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_FILE_EXTRA_VARS,Value:--- Nov 25 10:09:44 crc kubenswrapper[4760]: foo: bar Nov 25 10:09:44 crc kubenswrapper[4760]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_BRANCH,Value:,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_REPO,Value:https://github.com/ansible/test-playbooks,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_INVENTORY,Value:localhost ansible_connection=local ansible_python_interpreter=python3 Nov 25 10:09:44 crc kubenswrapper[4760]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_PLAYBOOK,Value:./debug.yml,ValueFrom:nil,},EnvVar{Name:POD_DEBUG,Value:false,ValueFrom:nil,},EnvVar{Name:POD_INSTALL_COLLECTIONS,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{4 0} {} 4 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{2147483648 0} {} 2Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/ansible,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/AnsibleTests/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/ansible/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/var/lib/ansible/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:workload-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/test_keypair.key,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:compute-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/.ssh/compute_id,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6prdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/se
rviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*227,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*227,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ansibletest-ansibletest_openstack(5fd9b990-91a9-4529-a951-15647544f5ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 25 10:09:44 crc kubenswrapper[4760]: > logger="UnhandledError" Nov 25 10:09:44 crc kubenswrapper[4760]: E1125 10:09:44.739837 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ansibletest-ansibletest" podUID="5fd9b990-91a9-4529-a951-15647544f5ec" Nov 25 10:09:45 crc kubenswrapper[4760]: E1125 10:09:45.472720 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified\\\"\"" pod="openstack/ansibletest-ansibletest" podUID="5fd9b990-91a9-4529-a951-15647544f5ec" Nov 25 10:10:02 crc kubenswrapper[4760]: I1125 10:10:02.647134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" 
event={"ID":"5fd9b990-91a9-4529-a951-15647544f5ec","Type":"ContainerStarted","Data":"545fdbd22f9f2923bfa50d36fbe7d3217609cce0693040e8466ba8fb43147859"} Nov 25 10:10:02 crc kubenswrapper[4760]: I1125 10:10:02.666789 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ansibletest-ansibletest" podStartSLOduration=3.4312286260000002 podStartE2EDuration="36.66676837s" podCreationTimestamp="2025-11-25 10:09:26 +0000 UTC" firstStartedPulling="2025-11-25 10:09:28.331087967 +0000 UTC m=+7102.040118762" lastFinishedPulling="2025-11-25 10:10:01.566627711 +0000 UTC m=+7135.275658506" observedRunningTime="2025-11-25 10:10:02.665541075 +0000 UTC m=+7136.374571890" watchObservedRunningTime="2025-11-25 10:10:02.66676837 +0000 UTC m=+7136.375799165" Nov 25 10:10:04 crc kubenswrapper[4760]: I1125 10:10:04.666180 4760 generic.go:334] "Generic (PLEG): container finished" podID="5fd9b990-91a9-4529-a951-15647544f5ec" containerID="545fdbd22f9f2923bfa50d36fbe7d3217609cce0693040e8466ba8fb43147859" exitCode=0 Nov 25 10:10:04 crc kubenswrapper[4760]: I1125 10:10:04.666237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"5fd9b990-91a9-4529-a951-15647544f5ec","Type":"ContainerDied","Data":"545fdbd22f9f2923bfa50d36fbe7d3217609cce0693040e8466ba8fb43147859"} Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.028215 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.145859 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ca-certs\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.145905 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-temporary\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.145942 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ceph\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.146074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6prdm\" (UniqueName: \"kubernetes.io/projected/5fd9b990-91a9-4529-a951-15647544f5ec-kube-api-access-6prdm\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.146118 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.146140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-workload-ssh-secret\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.146261 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-compute-ssh-secret\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.146296 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.146354 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config-secret\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.146403 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-workdir\") pod \"5fd9b990-91a9-4529-a951-15647544f5ec\" (UID: \"5fd9b990-91a9-4529-a951-15647544f5ec\") " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.146573 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5fd9b990-91a9-4529-a951-15647544f5ec" 
(UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.147392 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.152205 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5fd9b990-91a9-4529-a951-15647544f5ec" (UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.154541 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd9b990-91a9-4529-a951-15647544f5ec-kube-api-access-6prdm" (OuterVolumeSpecName: "kube-api-access-6prdm") pod "5fd9b990-91a9-4529-a951-15647544f5ec" (UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "kube-api-access-6prdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.159136 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5fd9b990-91a9-4529-a951-15647544f5ec" (UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.163805 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ceph" (OuterVolumeSpecName: "ceph") pod "5fd9b990-91a9-4529-a951-15647544f5ec" (UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.182468 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-workload-ssh-secret" (OuterVolumeSpecName: "workload-ssh-secret") pod "5fd9b990-91a9-4529-a951-15647544f5ec" (UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "workload-ssh-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.188998 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5fd9b990-91a9-4529-a951-15647544f5ec" (UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.199810 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-compute-ssh-secret" (OuterVolumeSpecName: "compute-ssh-secret") pod "5fd9b990-91a9-4529-a951-15647544f5ec" (UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "compute-ssh-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.209948 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5fd9b990-91a9-4529-a951-15647544f5ec" (UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.212600 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5fd9b990-91a9-4529-a951-15647544f5ec" (UID: "5fd9b990-91a9-4529-a951-15647544f5ec"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.249888 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6prdm\" (UniqueName: \"kubernetes.io/projected/5fd9b990-91a9-4529-a951-15647544f5ec-kube-api-access-6prdm\") on node \"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.249952 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.249965 4760 reconciler_common.go:293] "Volume detached for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-workload-ssh-secret\") on node \"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.249980 4760 reconciler_common.go:293] "Volume detached for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-compute-ssh-secret\") on node 
\"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.250079 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.250096 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.250109 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5fd9b990-91a9-4529-a951-15647544f5ec-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.250122 4760 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.250133 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fd9b990-91a9-4529-a951-15647544f5ec-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.279843 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.352464 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.688747 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" 
event={"ID":"5fd9b990-91a9-4529-a951-15647544f5ec","Type":"ContainerDied","Data":"5ee9b282f90c18d0ecf94a261ecaaabaee004a49d89b3f15f18caa783af848d6"} Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.688795 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee9b282f90c18d0ecf94a261ecaaabaee004a49d89b3f15f18caa783af848d6" Nov 25 10:10:06 crc kubenswrapper[4760]: I1125 10:10:06.688868 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.511778 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Nov 25 10:10:13 crc kubenswrapper[4760]: E1125 10:10:13.512825 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd9b990-91a9-4529-a951-15647544f5ec" containerName="ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.512844 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd9b990-91a9-4529-a951-15647544f5ec" containerName="ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.513078 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd9b990-91a9-4529-a951-15647544f5ec" containerName="ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.513858 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.523817 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.611641 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmnx7\" (UniqueName: \"kubernetes.io/projected/3d62b634-2cf7-42e7-b5d4-3791056b146a-kube-api-access-rmnx7\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3d62b634-2cf7-42e7-b5d4-3791056b146a\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.611756 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3d62b634-2cf7-42e7-b5d4-3791056b146a\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.713325 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmnx7\" (UniqueName: \"kubernetes.io/projected/3d62b634-2cf7-42e7-b5d4-3791056b146a-kube-api-access-rmnx7\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3d62b634-2cf7-42e7-b5d4-3791056b146a\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.713421 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: 
\"3d62b634-2cf7-42e7-b5d4-3791056b146a\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.713878 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3d62b634-2cf7-42e7-b5d4-3791056b146a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.737837 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmnx7\" (UniqueName: \"kubernetes.io/projected/3d62b634-2cf7-42e7-b5d4-3791056b146a-kube-api-access-rmnx7\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3d62b634-2cf7-42e7-b5d4-3791056b146a\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.744073 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"3d62b634-2cf7-42e7-b5d4-3791056b146a\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 10:10:13 crc kubenswrapper[4760]: I1125 10:10:13.852371 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 10:10:14 crc kubenswrapper[4760]: I1125 10:10:14.333891 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Nov 25 10:10:14 crc kubenswrapper[4760]: I1125 10:10:14.775332 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"3d62b634-2cf7-42e7-b5d4-3791056b146a","Type":"ContainerStarted","Data":"4aef615c5747b5ee03af262a8154a84bb8f3093eb08718d1ae0eec495035d366"} Nov 25 10:10:23 crc kubenswrapper[4760]: I1125 10:10:23.877236 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"3d62b634-2cf7-42e7-b5d4-3791056b146a","Type":"ContainerStarted","Data":"47f0ccbcad6332bd680d0ace9446f2bdc20f57488e990af5a73fb382a3bed6c0"} Nov 25 10:10:23 crc kubenswrapper[4760]: I1125 10:10:23.903881 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" podStartSLOduration=2.403350316 podStartE2EDuration="10.903856529s" podCreationTimestamp="2025-11-25 10:10:13 +0000 UTC" firstStartedPulling="2025-11-25 10:10:14.369980856 +0000 UTC m=+7148.079011651" lastFinishedPulling="2025-11-25 10:10:22.870487069 +0000 UTC m=+7156.579517864" observedRunningTime="2025-11-25 10:10:23.896297133 +0000 UTC m=+7157.605327968" watchObservedRunningTime="2025-11-25 10:10:23.903856529 +0000 UTC m=+7157.612887324" Nov 25 10:10:31 crc kubenswrapper[4760]: I1125 10:10:31.746882 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 
25 10:10:31 crc kubenswrapper[4760]: I1125 10:10:31.747476 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.510051 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizontest-tests-horizontest"] Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.512820 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.514648 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizontest-tests-horizontesthorizontest-config" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.515683 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.528114 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.565423 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.565572 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-openstack-config-secret\") 
pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.667493 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.667583 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.667641 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.667693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.667752 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.667854 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgtc\" (UniqueName: \"kubernetes.io/projected/aa57ea6c-4740-4010-a3d6-a0e070615d40-kube-api-access-2kgtc\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.667900 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.667997 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.669300 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.675580 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.769394 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.769447 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.769490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.769521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.769575 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-2kgtc\" (UniqueName: \"kubernetes.io/projected/aa57ea6c-4740-4010-a3d6-a0e070615d40-kube-api-access-2kgtc\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.769598 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.769965 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.770555 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.770744 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.776589 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.779888 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.790068 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kgtc\" (UniqueName: \"kubernetes.io/projected/aa57ea6c-4740-4010-a3d6-a0e070615d40-kube-api-access-2kgtc\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.800956 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"horizontest-tests-horizontest\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:38 crc kubenswrapper[4760]: I1125 10:10:38.844002 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Nov 25 10:10:41 crc kubenswrapper[4760]: I1125 10:10:39.132934 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Nov 25 10:10:41 crc kubenswrapper[4760]: I1125 10:10:40.041687 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"aa57ea6c-4740-4010-a3d6-a0e070615d40","Type":"ContainerStarted","Data":"ac91a9739adaf2bd0bc32253be5b6c46d170281d6623e4128b9a71a29ea77797"} Nov 25 10:11:01 crc kubenswrapper[4760]: I1125 10:11:01.746726 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:11:01 crc kubenswrapper[4760]: I1125 10:11:01.747431 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:11:05 crc kubenswrapper[4760]: E1125 10:11:05.920206 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizontest:current-podified" Nov 25 10:11:05 crc kubenswrapper[4760]: E1125 10:11:05.921773 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizontest-tests-horizontest,Image:quay.io/podified-antelope-centos9/openstack-horizontest:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADMIN_PASSWORD,Value:12345678,ValueFrom:nil,},EnvVar{Name:ADMIN_USERNAME,Value:admin,ValueFrom:nil,},EnvVar{Name:AUTH_URL,Value:https://keystone-public-openstack.apps-crc.testing,ValueFrom:nil,},EnvVar{Name:DASHBOARD_URL,Value:https://horizon-openstack.apps-crc.testing/,ValueFrom:nil,},EnvVar{Name:EXTRA_FLAG,Value:not pagination and test_users.py,ValueFrom:nil,},EnvVar{Name:FLAVOR_NAME,Value:m1.tiny,ValueFrom:nil,},EnvVar{Name:HORIZONTEST_DEBUG_MODE,Value:false,ValueFrom:nil,},EnvVar{Name:HORIZON_KEYS_FOLDER,Value:/etc/test_operator,ValueFrom:nil,},EnvVar{Name:HORIZON_LOGS_DIR_NAME,Value:horizon,ValueFrom:nil,},EnvVar{Name:HORIZON_REPO_BRANCH,Value:master,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE,Value:/var/lib/horizontest/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE_NAME,Value:cirros-0.6.2-x86_64-disk,ValueFrom:nil,},EnvVar{Name:IMAGE_URL,Value:http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:PASSWORD,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME_XPATH,Value://*[@class=\"context-project\"]//ancestor::ul,ValueFrom:nil,},EnvVar{Name:REPO_URL,Value:https://review.opendev.org/openstack/horizon,ValueFrom:nil,},EnvVar{Name:USER_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:USE_EXTERNAL_FILES,Value:True,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{1 0} {} 1 DecimalSI},memory: {{2147483648 0} {} 2Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/horizontest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/horizontest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/var/lib/horizontest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kgtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN 
NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42455,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42455,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizontest-tests-horizontest_openstack(aa57ea6c-4740-4010-a3d6-a0e070615d40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 10:11:05 crc kubenswrapper[4760]: E1125 10:11:05.923041 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizontest-tests-horizontest" podUID="aa57ea6c-4740-4010-a3d6-a0e070615d40" Nov 25 10:11:06 crc kubenswrapper[4760]: E1125 10:11:06.378561 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizontest:current-podified\\\"\"" pod="openstack/horizontest-tests-horizontest" podUID="aa57ea6c-4740-4010-a3d6-a0e070615d40" Nov 25 10:11:26 crc kubenswrapper[4760]: I1125 10:11:26.621191 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"aa57ea6c-4740-4010-a3d6-a0e070615d40","Type":"ContainerStarted","Data":"99f04da8b2770b7705cab0ffba4b3beb8ed033730473341e2727a3eea3082d3b"} Nov 25 10:11:26 crc kubenswrapper[4760]: I1125 10:11:26.653338 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizontest-tests-horizontest" 
podStartSLOduration=4.073850093 podStartE2EDuration="49.653318515s" podCreationTimestamp="2025-11-25 10:10:37 +0000 UTC" firstStartedPulling="2025-11-25 10:10:39.138700906 +0000 UTC m=+7172.847731701" lastFinishedPulling="2025-11-25 10:11:24.718169328 +0000 UTC m=+7218.427200123" observedRunningTime="2025-11-25 10:11:26.647140469 +0000 UTC m=+7220.356171264" watchObservedRunningTime="2025-11-25 10:11:26.653318515 +0000 UTC m=+7220.362349310" Nov 25 10:11:31 crc kubenswrapper[4760]: I1125 10:11:31.748472 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:11:31 crc kubenswrapper[4760]: I1125 10:11:31.749066 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:11:31 crc kubenswrapper[4760]: I1125 10:11:31.749117 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 10:11:31 crc kubenswrapper[4760]: I1125 10:11:31.749989 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97bc83681ec651ceba1b9d3f3554238617eb62f9df1e07f0a88e8966aea91621"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:11:31 crc kubenswrapper[4760]: I1125 10:11:31.750044 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://97bc83681ec651ceba1b9d3f3554238617eb62f9df1e07f0a88e8966aea91621" gracePeriod=600 Nov 25 10:11:32 crc kubenswrapper[4760]: I1125 10:11:32.687494 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="97bc83681ec651ceba1b9d3f3554238617eb62f9df1e07f0a88e8966aea91621" exitCode=0 Nov 25 10:11:32 crc kubenswrapper[4760]: I1125 10:11:32.687572 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"97bc83681ec651ceba1b9d3f3554238617eb62f9df1e07f0a88e8966aea91621"} Nov 25 10:11:32 crc kubenswrapper[4760]: I1125 10:11:32.687896 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98"} Nov 25 10:11:32 crc kubenswrapper[4760]: I1125 10:11:32.687933 4760 scope.go:117] "RemoveContainer" containerID="ac50a9fb8de89933936bb74b49fcd16915a73ea7ee60bd8d242acba96fb05b7c" Nov 25 10:11:52 crc kubenswrapper[4760]: I1125 10:11:52.894961 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9tq24"] Nov 25 10:11:52 crc kubenswrapper[4760]: I1125 10:11:52.898649 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:52 crc kubenswrapper[4760]: I1125 10:11:52.912876 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9tq24"] Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.054757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27rl\" (UniqueName: \"kubernetes.io/projected/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-kube-api-access-f27rl\") pod \"certified-operators-9tq24\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.055241 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-utilities\") pod \"certified-operators-9tq24\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.055527 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-catalog-content\") pod \"certified-operators-9tq24\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.158346 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-catalog-content\") pod \"certified-operators-9tq24\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.158718 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f27rl\" (UniqueName: \"kubernetes.io/projected/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-kube-api-access-f27rl\") pod \"certified-operators-9tq24\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.158810 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-utilities\") pod \"certified-operators-9tq24\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.158962 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-catalog-content\") pod \"certified-operators-9tq24\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.159782 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-utilities\") pod \"certified-operators-9tq24\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.188106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f27rl\" (UniqueName: \"kubernetes.io/projected/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-kube-api-access-f27rl\") pod \"certified-operators-9tq24\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.222099 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.843400 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9tq24"] Nov 25 10:11:53 crc kubenswrapper[4760]: I1125 10:11:53.939701 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tq24" event={"ID":"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0","Type":"ContainerStarted","Data":"4862f0690e54f7f273a1fef3313c138c6ccf8a846d42df610327e57a49800617"} Nov 25 10:11:54 crc kubenswrapper[4760]: I1125 10:11:54.962726 4760 generic.go:334] "Generic (PLEG): container finished" podID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerID="fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad" exitCode=0 Nov 25 10:11:54 crc kubenswrapper[4760]: I1125 10:11:54.967000 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tq24" event={"ID":"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0","Type":"ContainerDied","Data":"fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad"} Nov 25 10:11:56 crc kubenswrapper[4760]: I1125 10:11:56.985748 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tq24" event={"ID":"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0","Type":"ContainerStarted","Data":"bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3"} Nov 25 10:12:00 crc kubenswrapper[4760]: I1125 10:12:00.021985 4760 generic.go:334] "Generic (PLEG): container finished" podID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerID="bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3" exitCode=0 Nov 25 10:12:00 crc kubenswrapper[4760]: I1125 10:12:00.022128 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tq24" 
event={"ID":"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0","Type":"ContainerDied","Data":"bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3"} Nov 25 10:12:01 crc kubenswrapper[4760]: I1125 10:12:01.033522 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tq24" event={"ID":"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0","Type":"ContainerStarted","Data":"709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe"} Nov 25 10:12:01 crc kubenswrapper[4760]: I1125 10:12:01.061081 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9tq24" podStartSLOduration=3.421667632 podStartE2EDuration="9.061052359s" podCreationTimestamp="2025-11-25 10:11:52 +0000 UTC" firstStartedPulling="2025-11-25 10:11:54.965512903 +0000 UTC m=+7248.674543698" lastFinishedPulling="2025-11-25 10:12:00.60489762 +0000 UTC m=+7254.313928425" observedRunningTime="2025-11-25 10:12:01.058197818 +0000 UTC m=+7254.767228633" watchObservedRunningTime="2025-11-25 10:12:01.061052359 +0000 UTC m=+7254.770083174" Nov 25 10:12:03 crc kubenswrapper[4760]: I1125 10:12:03.222767 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:12:03 crc kubenswrapper[4760]: I1125 10:12:03.223777 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:12:03 crc kubenswrapper[4760]: I1125 10:12:03.278143 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:12:13 crc kubenswrapper[4760]: I1125 10:12:13.279714 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:12:13 crc kubenswrapper[4760]: I1125 10:12:13.367617 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-9tq24"] Nov 25 10:12:14 crc kubenswrapper[4760]: I1125 10:12:14.335181 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9tq24" podUID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerName="registry-server" containerID="cri-o://709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe" gracePeriod=2 Nov 25 10:12:14 crc kubenswrapper[4760]: I1125 10:12:14.837703 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:12:14 crc kubenswrapper[4760]: I1125 10:12:14.937139 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f27rl\" (UniqueName: \"kubernetes.io/projected/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-kube-api-access-f27rl\") pod \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " Nov 25 10:12:14 crc kubenswrapper[4760]: I1125 10:12:14.937675 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-catalog-content\") pod \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " Nov 25 10:12:14 crc kubenswrapper[4760]: I1125 10:12:14.937945 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-utilities\") pod \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\" (UID: \"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0\") " Nov 25 10:12:14 crc kubenswrapper[4760]: I1125 10:12:14.938926 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-utilities" (OuterVolumeSpecName: "utilities") pod "62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" (UID: 
"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:12:14 crc kubenswrapper[4760]: I1125 10:12:14.958704 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-kube-api-access-f27rl" (OuterVolumeSpecName: "kube-api-access-f27rl") pod "62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" (UID: "62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0"). InnerVolumeSpecName "kube-api-access-f27rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.006276 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" (UID: "62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.040230 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f27rl\" (UniqueName: \"kubernetes.io/projected/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-kube-api-access-f27rl\") on node \"crc\" DevicePath \"\"" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.040307 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.040319 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.350007 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerID="709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe" exitCode=0 Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.350117 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tq24" event={"ID":"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0","Type":"ContainerDied","Data":"709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe"} Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.350157 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9tq24" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.351389 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tq24" event={"ID":"62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0","Type":"ContainerDied","Data":"4862f0690e54f7f273a1fef3313c138c6ccf8a846d42df610327e57a49800617"} Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.351507 4760 scope.go:117] "RemoveContainer" containerID="709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.392653 4760 scope.go:117] "RemoveContainer" containerID="bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.393365 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9tq24"] Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.409147 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9tq24"] Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.418769 4760 scope.go:117] "RemoveContainer" containerID="fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.478510 4760 scope.go:117] "RemoveContainer" 
containerID="709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe" Nov 25 10:12:15 crc kubenswrapper[4760]: E1125 10:12:15.479135 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe\": container with ID starting with 709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe not found: ID does not exist" containerID="709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.479183 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe"} err="failed to get container status \"709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe\": rpc error: code = NotFound desc = could not find container \"709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe\": container with ID starting with 709ed209410332a54b7576622897f270b8bde688b253cace6449986015bacbfe not found: ID does not exist" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.479212 4760 scope.go:117] "RemoveContainer" containerID="bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3" Nov 25 10:12:15 crc kubenswrapper[4760]: E1125 10:12:15.479755 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3\": container with ID starting with bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3 not found: ID does not exist" containerID="bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.479789 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3"} err="failed to get container status \"bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3\": rpc error: code = NotFound desc = could not find container \"bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3\": container with ID starting with bc687c5996c5e1d08f7575c9915a1d5001e3cc07bb2a357d21be618b3b24bea3 not found: ID does not exist" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.479813 4760 scope.go:117] "RemoveContainer" containerID="fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad" Nov 25 10:12:15 crc kubenswrapper[4760]: E1125 10:12:15.481045 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad\": container with ID starting with fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad not found: ID does not exist" containerID="fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad" Nov 25 10:12:15 crc kubenswrapper[4760]: I1125 10:12:15.481122 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad"} err="failed to get container status \"fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad\": rpc error: code = NotFound desc = could not find container \"fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad\": container with ID starting with fbb79bb90856d87ea33bf28ba4438228c2ddea79742771204d29476d829eefad not found: ID does not exist" Nov 25 10:12:16 crc kubenswrapper[4760]: I1125 10:12:16.954924 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" path="/var/lib/kubelet/pods/62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0/volumes" Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 
10:13:10.841774 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-55lmc"] Nov 25 10:13:10 crc kubenswrapper[4760]: E1125 10:13:10.843938 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerName="extract-content" Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 10:13:10.845473 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerName="extract-content" Nov 25 10:13:10 crc kubenswrapper[4760]: E1125 10:13:10.845564 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerName="extract-utilities" Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 10:13:10.845639 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerName="extract-utilities" Nov 25 10:13:10 crc kubenswrapper[4760]: E1125 10:13:10.845709 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerName="registry-server" Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 10:13:10.846370 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerName="registry-server" Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 10:13:10.846849 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="62fd0f5b-9f6d-40ad-b885-4f9a0e759bd0" containerName="registry-server" Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 10:13:10.848684 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 10:13:10.851395 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55lmc"] Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 10:13:10.960834 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-utilities\") pod \"redhat-operators-55lmc\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 10:13:10.961072 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6dqg\" (UniqueName: \"kubernetes.io/projected/cfba230b-368e-4dda-9fda-cdf3962a72af-kube-api-access-m6dqg\") pod \"redhat-operators-55lmc\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:10 crc kubenswrapper[4760]: I1125 10:13:10.961109 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-catalog-content\") pod \"redhat-operators-55lmc\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:11 crc kubenswrapper[4760]: I1125 10:13:11.063620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6dqg\" (UniqueName: \"kubernetes.io/projected/cfba230b-368e-4dda-9fda-cdf3962a72af-kube-api-access-m6dqg\") pod \"redhat-operators-55lmc\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:11 crc kubenswrapper[4760]: I1125 10:13:11.063716 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-catalog-content\") pod \"redhat-operators-55lmc\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:11 crc kubenswrapper[4760]: I1125 10:13:11.064662 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-utilities\") pod \"redhat-operators-55lmc\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:11 crc kubenswrapper[4760]: I1125 10:13:11.064689 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-catalog-content\") pod \"redhat-operators-55lmc\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:11 crc kubenswrapper[4760]: I1125 10:13:11.064988 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-utilities\") pod \"redhat-operators-55lmc\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:11 crc kubenswrapper[4760]: I1125 10:13:11.094372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6dqg\" (UniqueName: \"kubernetes.io/projected/cfba230b-368e-4dda-9fda-cdf3962a72af-kube-api-access-m6dqg\") pod \"redhat-operators-55lmc\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:11 crc kubenswrapper[4760]: I1125 10:13:11.174841 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:11 crc kubenswrapper[4760]: I1125 10:13:11.675341 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55lmc"] Nov 25 10:13:11 crc kubenswrapper[4760]: W1125 10:13:11.721416 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfba230b_368e_4dda_9fda_cdf3962a72af.slice/crio-ce0b560aa08bbf6f1701eec964a4bcdf3c73a96a5b133c2cd6d729a8ee6963aa WatchSource:0}: Error finding container ce0b560aa08bbf6f1701eec964a4bcdf3c73a96a5b133c2cd6d729a8ee6963aa: Status 404 returned error can't find the container with id ce0b560aa08bbf6f1701eec964a4bcdf3c73a96a5b133c2cd6d729a8ee6963aa Nov 25 10:13:11 crc kubenswrapper[4760]: I1125 10:13:11.934597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55lmc" event={"ID":"cfba230b-368e-4dda-9fda-cdf3962a72af","Type":"ContainerStarted","Data":"ce0b560aa08bbf6f1701eec964a4bcdf3c73a96a5b133c2cd6d729a8ee6963aa"} Nov 25 10:13:12 crc kubenswrapper[4760]: I1125 10:13:12.965616 4760 generic.go:334] "Generic (PLEG): container finished" podID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerID="d14c5317bdcb8d5a88a0385d08218933491493104ff3bf98860670d4dfba4506" exitCode=0 Nov 25 10:13:13 crc kubenswrapper[4760]: I1125 10:13:13.088524 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55lmc" event={"ID":"cfba230b-368e-4dda-9fda-cdf3962a72af","Type":"ContainerDied","Data":"d14c5317bdcb8d5a88a0385d08218933491493104ff3bf98860670d4dfba4506"} Nov 25 10:13:15 crc kubenswrapper[4760]: I1125 10:13:15.991589 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55lmc" 
event={"ID":"cfba230b-368e-4dda-9fda-cdf3962a72af","Type":"ContainerStarted","Data":"012868981bb5ec47d18750a9fc7923154065c467e3ed13faaa1f2b988f858509"} Nov 25 10:13:24 crc kubenswrapper[4760]: I1125 10:13:24.065547 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa57ea6c-4740-4010-a3d6-a0e070615d40" containerID="99f04da8b2770b7705cab0ffba4b3beb8ed033730473341e2727a3eea3082d3b" exitCode=0 Nov 25 10:13:24 crc kubenswrapper[4760]: I1125 10:13:24.065647 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"aa57ea6c-4740-4010-a3d6-a0e070615d40","Type":"ContainerDied","Data":"99f04da8b2770b7705cab0ffba4b3beb8ed033730473341e2727a3eea3082d3b"} Nov 25 10:13:24 crc kubenswrapper[4760]: I1125 10:13:24.071540 4760 generic.go:334] "Generic (PLEG): container finished" podID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerID="012868981bb5ec47d18750a9fc7923154065c467e3ed13faaa1f2b988f858509" exitCode=0 Nov 25 10:13:24 crc kubenswrapper[4760]: I1125 10:13:24.071593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55lmc" event={"ID":"cfba230b-368e-4dda-9fda-cdf3962a72af","Type":"ContainerDied","Data":"012868981bb5ec47d18750a9fc7923154065c467e3ed13faaa1f2b988f858509"} Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.469474 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.574341 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"aa57ea6c-4740-4010-a3d6-a0e070615d40\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.574455 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-workdir\") pod \"aa57ea6c-4740-4010-a3d6-a0e070615d40\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.574567 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kgtc\" (UniqueName: \"kubernetes.io/projected/aa57ea6c-4740-4010-a3d6-a0e070615d40-kube-api-access-2kgtc\") pod \"aa57ea6c-4740-4010-a3d6-a0e070615d40\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.574716 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ceph\") pod \"aa57ea6c-4740-4010-a3d6-a0e070615d40\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.574799 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-temporary\") pod \"aa57ea6c-4740-4010-a3d6-a0e070615d40\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.574857 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ca-certs\") pod \"aa57ea6c-4740-4010-a3d6-a0e070615d40\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.574889 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-openstack-config-secret\") pod \"aa57ea6c-4740-4010-a3d6-a0e070615d40\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.574951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-clouds-config\") pod \"aa57ea6c-4740-4010-a3d6-a0e070615d40\" (UID: \"aa57ea6c-4740-4010-a3d6-a0e070615d40\") " Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.575968 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "aa57ea6c-4740-4010-a3d6-a0e070615d40" (UID: "aa57ea6c-4740-4010-a3d6-a0e070615d40"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.582988 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "aa57ea6c-4740-4010-a3d6-a0e070615d40" (UID: "aa57ea6c-4740-4010-a3d6-a0e070615d40"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.583521 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ceph" (OuterVolumeSpecName: "ceph") pod "aa57ea6c-4740-4010-a3d6-a0e070615d40" (UID: "aa57ea6c-4740-4010-a3d6-a0e070615d40"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.585966 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa57ea6c-4740-4010-a3d6-a0e070615d40-kube-api-access-2kgtc" (OuterVolumeSpecName: "kube-api-access-2kgtc") pod "aa57ea6c-4740-4010-a3d6-a0e070615d40" (UID: "aa57ea6c-4740-4010-a3d6-a0e070615d40"). InnerVolumeSpecName "kube-api-access-2kgtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.607819 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "aa57ea6c-4740-4010-a3d6-a0e070615d40" (UID: "aa57ea6c-4740-4010-a3d6-a0e070615d40"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.627003 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "aa57ea6c-4740-4010-a3d6-a0e070615d40" (UID: "aa57ea6c-4740-4010-a3d6-a0e070615d40"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.636047 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "aa57ea6c-4740-4010-a3d6-a0e070615d40" (UID: "aa57ea6c-4740-4010-a3d6-a0e070615d40"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.677134 4760 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.677175 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.677186 4760 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.677198 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa57ea6c-4740-4010-a3d6-a0e070615d40-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.677209 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.677238 4760 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.677263 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kgtc\" (UniqueName: \"kubernetes.io/projected/aa57ea6c-4740-4010-a3d6-a0e070615d40-kube-api-access-2kgtc\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.707131 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.779102 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.783593 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "aa57ea6c-4740-4010-a3d6-a0e070615d40" (UID: "aa57ea6c-4740-4010-a3d6-a0e070615d40"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:13:25 crc kubenswrapper[4760]: I1125 10:13:25.881647 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/aa57ea6c-4740-4010-a3d6-a0e070615d40-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:26 crc kubenswrapper[4760]: I1125 10:13:26.092806 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"aa57ea6c-4740-4010-a3d6-a0e070615d40","Type":"ContainerDied","Data":"ac91a9739adaf2bd0bc32253be5b6c46d170281d6623e4128b9a71a29ea77797"} Nov 25 10:13:26 crc kubenswrapper[4760]: I1125 10:13:26.092845 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac91a9739adaf2bd0bc32253be5b6c46d170281d6623e4128b9a71a29ea77797" Nov 25 10:13:26 crc kubenswrapper[4760]: I1125 10:13:26.092879 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Nov 25 10:13:28 crc kubenswrapper[4760]: I1125 10:13:28.110926 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55lmc" event={"ID":"cfba230b-368e-4dda-9fda-cdf3962a72af","Type":"ContainerStarted","Data":"411960a29a7fc0197c4649da95d4c597b32f89deee22cf35e798fc92ae7c7e97"} Nov 25 10:13:28 crc kubenswrapper[4760]: I1125 10:13:28.133325 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-55lmc" podStartSLOduration=3.962508598 podStartE2EDuration="18.133307093s" podCreationTimestamp="2025-11-25 10:13:10 +0000 UTC" firstStartedPulling="2025-11-25 10:13:12.96866346 +0000 UTC m=+7326.677694245" lastFinishedPulling="2025-11-25 10:13:27.139461945 +0000 UTC m=+7340.848492740" observedRunningTime="2025-11-25 10:13:28.126027125 +0000 UTC m=+7341.835057960" watchObservedRunningTime="2025-11-25 10:13:28.133307093 +0000 UTC m=+7341.842337888" Nov 25 10:13:31 crc kubenswrapper[4760]: I1125 10:13:31.174943 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:31 crc kubenswrapper[4760]: I1125 10:13:31.175290 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:32 crc kubenswrapper[4760]: I1125 10:13:32.237724 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-55lmc" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerName="registry-server" probeResult="failure" output=< Nov 25 10:13:32 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 10:13:32 crc kubenswrapper[4760]: > Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.114768 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Nov 25 10:13:34 crc kubenswrapper[4760]: E1125 10:13:34.115613 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa57ea6c-4740-4010-a3d6-a0e070615d40" containerName="horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.115634 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa57ea6c-4740-4010-a3d6-a0e070615d40" containerName="horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.115848 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa57ea6c-4740-4010-a3d6-a0e070615d40" containerName="horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.116617 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.127795 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.245204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzm4l\" (UniqueName: \"kubernetes.io/projected/9b073dce-d4e1-4018-bfe6-f0a54597f116-kube-api-access-gzm4l\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9b073dce-d4e1-4018-bfe6-f0a54597f116\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.245293 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: 
\"9b073dce-d4e1-4018-bfe6-f0a54597f116\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.346955 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzm4l\" (UniqueName: \"kubernetes.io/projected/9b073dce-d4e1-4018-bfe6-f0a54597f116-kube-api-access-gzm4l\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9b073dce-d4e1-4018-bfe6-f0a54597f116\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.347011 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9b073dce-d4e1-4018-bfe6-f0a54597f116\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.347544 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9b073dce-d4e1-4018-bfe6-f0a54597f116\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.375791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzm4l\" (UniqueName: \"kubernetes.io/projected/9b073dce-d4e1-4018-bfe6-f0a54597f116-kube-api-access-gzm4l\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9b073dce-d4e1-4018-bfe6-f0a54597f116\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Nov 25 10:13:34 crc 
kubenswrapper[4760]: I1125 10:13:34.378726 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"9b073dce-d4e1-4018-bfe6-f0a54597f116\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.436766 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Nov 25 10:13:34 crc kubenswrapper[4760]: E1125 10:13:34.437007 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:13:34 crc kubenswrapper[4760]: I1125 10:13:34.895933 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Nov 25 10:13:34 crc kubenswrapper[4760]: W1125 10:13:34.896140 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b073dce_d4e1_4018_bfe6_f0a54597f116.slice/crio-c130c2d589acbd054210cfdaf6b51a6f0016cda4673212e62edea582500fe0c8 WatchSource:0}: Error finding container c130c2d589acbd054210cfdaf6b51a6f0016cda4673212e62edea582500fe0c8: Status 404 returned error can't find the container with id c130c2d589acbd054210cfdaf6b51a6f0016cda4673212e62edea582500fe0c8 Nov 25 10:13:34 crc kubenswrapper[4760]: E1125 10:13:34.904629 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 
25 10:13:35 crc kubenswrapper[4760]: I1125 10:13:35.225020 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"9b073dce-d4e1-4018-bfe6-f0a54597f116","Type":"ContainerStarted","Data":"c130c2d589acbd054210cfdaf6b51a6f0016cda4673212e62edea582500fe0c8"} Nov 25 10:13:36 crc kubenswrapper[4760]: E1125 10:13:36.752392 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:13:38 crc kubenswrapper[4760]: I1125 10:13:38.255586 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"9b073dce-d4e1-4018-bfe6-f0a54597f116","Type":"ContainerStarted","Data":"2b964e0755df662dbb35c53467bb0e57df2cb6ac5242df0e9fd8229ba5ecd3ea"} Nov 25 10:13:38 crc kubenswrapper[4760]: E1125 10:13:38.256447 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:13:38 crc kubenswrapper[4760]: I1125 10:13:38.274901 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" podStartSLOduration=2.42918459 podStartE2EDuration="4.2748786s" podCreationTimestamp="2025-11-25 10:13:34 +0000 UTC" firstStartedPulling="2025-11-25 10:13:34.906200898 +0000 UTC m=+7348.615231703" lastFinishedPulling="2025-11-25 10:13:36.751894908 +0000 UTC m=+7350.460925713" observedRunningTime="2025-11-25 10:13:38.267282524 +0000 UTC m=+7351.976313339" watchObservedRunningTime="2025-11-25 10:13:38.2748786 +0000 UTC m=+7351.983909395" Nov 25 10:13:39 crc 
kubenswrapper[4760]: E1125 10:13:39.268709 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:13:41 crc kubenswrapper[4760]: I1125 10:13:41.226266 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:41 crc kubenswrapper[4760]: I1125 10:13:41.276094 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:42 crc kubenswrapper[4760]: I1125 10:13:42.041823 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55lmc"] Nov 25 10:13:42 crc kubenswrapper[4760]: I1125 10:13:42.292724 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-55lmc" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerName="registry-server" containerID="cri-o://411960a29a7fc0197c4649da95d4c597b32f89deee22cf35e798fc92ae7c7e97" gracePeriod=2 Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.316145 4760 generic.go:334] "Generic (PLEG): container finished" podID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerID="411960a29a7fc0197c4649da95d4c597b32f89deee22cf35e798fc92ae7c7e97" exitCode=0 Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.316446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55lmc" event={"ID":"cfba230b-368e-4dda-9fda-cdf3962a72af","Type":"ContainerDied","Data":"411960a29a7fc0197c4649da95d4c597b32f89deee22cf35e798fc92ae7c7e97"} Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.555614 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.673455 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-catalog-content\") pod \"cfba230b-368e-4dda-9fda-cdf3962a72af\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.673789 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-utilities\") pod \"cfba230b-368e-4dda-9fda-cdf3962a72af\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.673821 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6dqg\" (UniqueName: \"kubernetes.io/projected/cfba230b-368e-4dda-9fda-cdf3962a72af-kube-api-access-m6dqg\") pod \"cfba230b-368e-4dda-9fda-cdf3962a72af\" (UID: \"cfba230b-368e-4dda-9fda-cdf3962a72af\") " Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.675056 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-utilities" (OuterVolumeSpecName: "utilities") pod "cfba230b-368e-4dda-9fda-cdf3962a72af" (UID: "cfba230b-368e-4dda-9fda-cdf3962a72af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.684850 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfba230b-368e-4dda-9fda-cdf3962a72af-kube-api-access-m6dqg" (OuterVolumeSpecName: "kube-api-access-m6dqg") pod "cfba230b-368e-4dda-9fda-cdf3962a72af" (UID: "cfba230b-368e-4dda-9fda-cdf3962a72af"). InnerVolumeSpecName "kube-api-access-m6dqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.775796 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.775834 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6dqg\" (UniqueName: \"kubernetes.io/projected/cfba230b-368e-4dda-9fda-cdf3962a72af-kube-api-access-m6dqg\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.776162 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfba230b-368e-4dda-9fda-cdf3962a72af" (UID: "cfba230b-368e-4dda-9fda-cdf3962a72af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:13:43 crc kubenswrapper[4760]: I1125 10:13:43.877428 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfba230b-368e-4dda-9fda-cdf3962a72af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:13:44 crc kubenswrapper[4760]: I1125 10:13:44.343535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55lmc" event={"ID":"cfba230b-368e-4dda-9fda-cdf3962a72af","Type":"ContainerDied","Data":"ce0b560aa08bbf6f1701eec964a4bcdf3c73a96a5b133c2cd6d729a8ee6963aa"} Nov 25 10:13:44 crc kubenswrapper[4760]: I1125 10:13:44.343808 4760 scope.go:117] "RemoveContainer" containerID="411960a29a7fc0197c4649da95d4c597b32f89deee22cf35e798fc92ae7c7e97" Nov 25 10:13:44 crc kubenswrapper[4760]: I1125 10:13:44.343977 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-55lmc" Nov 25 10:13:44 crc kubenswrapper[4760]: I1125 10:13:44.367980 4760 scope.go:117] "RemoveContainer" containerID="012868981bb5ec47d18750a9fc7923154065c467e3ed13faaa1f2b988f858509" Nov 25 10:13:44 crc kubenswrapper[4760]: I1125 10:13:44.388778 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55lmc"] Nov 25 10:13:44 crc kubenswrapper[4760]: I1125 10:13:44.396430 4760 scope.go:117] "RemoveContainer" containerID="d14c5317bdcb8d5a88a0385d08218933491493104ff3bf98860670d4dfba4506" Nov 25 10:13:44 crc kubenswrapper[4760]: I1125 10:13:44.398143 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-55lmc"] Nov 25 10:13:44 crc kubenswrapper[4760]: I1125 10:13:44.950027 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" path="/var/lib/kubelet/pods/cfba230b-368e-4dda-9fda-cdf3962a72af/volumes" Nov 25 10:14:01 crc kubenswrapper[4760]: I1125 10:14:01.746080 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:14:01 crc kubenswrapper[4760]: I1125 10:14:01.746657 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.769263 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x8nbl/must-gather-92n9p"] Nov 25 10:14:05 crc kubenswrapper[4760]: E1125 
10:14:05.770996 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerName="extract-content" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.771092 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerName="extract-content" Nov 25 10:14:05 crc kubenswrapper[4760]: E1125 10:14:05.771180 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerName="registry-server" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.771272 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerName="registry-server" Nov 25 10:14:05 crc kubenswrapper[4760]: E1125 10:14:05.771383 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerName="extract-utilities" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.771447 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerName="extract-utilities" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.771734 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfba230b-368e-4dda-9fda-cdf3962a72af" containerName="registry-server" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.773012 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.775371 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x8nbl"/"kube-root-ca.crt" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.775557 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x8nbl"/"default-dockercfg-7gl9l" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.776580 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x8nbl"/"openshift-service-ca.crt" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.783655 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x8nbl/must-gather-92n9p"] Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.868863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x7sx\" (UniqueName: \"kubernetes.io/projected/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-kube-api-access-7x7sx\") pod \"must-gather-92n9p\" (UID: \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\") " pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.869355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-must-gather-output\") pod \"must-gather-92n9p\" (UID: \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\") " pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.971964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x7sx\" (UniqueName: \"kubernetes.io/projected/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-kube-api-access-7x7sx\") pod \"must-gather-92n9p\" (UID: \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\") " 
pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.972141 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-must-gather-output\") pod \"must-gather-92n9p\" (UID: \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\") " pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:14:05 crc kubenswrapper[4760]: I1125 10:14:05.972819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-must-gather-output\") pod \"must-gather-92n9p\" (UID: \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\") " pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:14:06 crc kubenswrapper[4760]: I1125 10:14:06.003607 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x7sx\" (UniqueName: \"kubernetes.io/projected/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-kube-api-access-7x7sx\") pod \"must-gather-92n9p\" (UID: \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\") " pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:14:06 crc kubenswrapper[4760]: I1125 10:14:06.091334 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:14:06 crc kubenswrapper[4760]: I1125 10:14:06.591773 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x8nbl/must-gather-92n9p"] Nov 25 10:14:07 crc kubenswrapper[4760]: I1125 10:14:07.562944 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/must-gather-92n9p" event={"ID":"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1","Type":"ContainerStarted","Data":"c4007027276f89c65c07f5c66e7a79cee18501b531a335b012f93d80cbe90e60"} Nov 25 10:14:13 crc kubenswrapper[4760]: I1125 10:14:13.618174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/must-gather-92n9p" event={"ID":"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1","Type":"ContainerStarted","Data":"b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633"} Nov 25 10:14:14 crc kubenswrapper[4760]: I1125 10:14:14.631486 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/must-gather-92n9p" event={"ID":"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1","Type":"ContainerStarted","Data":"e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48"} Nov 25 10:14:14 crc kubenswrapper[4760]: I1125 10:14:14.645016 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x8nbl/must-gather-92n9p" podStartSLOduration=3.156284222 podStartE2EDuration="9.644997576s" podCreationTimestamp="2025-11-25 10:14:05 +0000 UTC" firstStartedPulling="2025-11-25 10:14:06.594377752 +0000 UTC m=+7380.303408537" lastFinishedPulling="2025-11-25 10:14:13.083091096 +0000 UTC m=+7386.792121891" observedRunningTime="2025-11-25 10:14:14.643881464 +0000 UTC m=+7388.352912259" watchObservedRunningTime="2025-11-25 10:14:14.644997576 +0000 UTC m=+7388.354028381" Nov 25 10:14:23 crc kubenswrapper[4760]: I1125 10:14:23.684331 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-x8nbl/crc-debug-6vkh8"] Nov 25 10:14:23 crc kubenswrapper[4760]: I1125 10:14:23.686485 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:14:23 crc kubenswrapper[4760]: I1125 10:14:23.754373 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7685v\" (UniqueName: \"kubernetes.io/projected/041a93d0-ab8a-4cd6-9466-28e25cf986fc-kube-api-access-7685v\") pod \"crc-debug-6vkh8\" (UID: \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\") " pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:14:23 crc kubenswrapper[4760]: I1125 10:14:23.754650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/041a93d0-ab8a-4cd6-9466-28e25cf986fc-host\") pod \"crc-debug-6vkh8\" (UID: \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\") " pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:14:23 crc kubenswrapper[4760]: I1125 10:14:23.856982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7685v\" (UniqueName: \"kubernetes.io/projected/041a93d0-ab8a-4cd6-9466-28e25cf986fc-kube-api-access-7685v\") pod \"crc-debug-6vkh8\" (UID: \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\") " pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:14:23 crc kubenswrapper[4760]: I1125 10:14:23.857745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/041a93d0-ab8a-4cd6-9466-28e25cf986fc-host\") pod \"crc-debug-6vkh8\" (UID: \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\") " pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:14:23 crc kubenswrapper[4760]: I1125 10:14:23.857867 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/041a93d0-ab8a-4cd6-9466-28e25cf986fc-host\") pod \"crc-debug-6vkh8\" (UID: \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\") " pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:14:23 crc kubenswrapper[4760]: I1125 10:14:23.883828 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7685v\" (UniqueName: \"kubernetes.io/projected/041a93d0-ab8a-4cd6-9466-28e25cf986fc-kube-api-access-7685v\") pod \"crc-debug-6vkh8\" (UID: \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\") " pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:14:24 crc kubenswrapper[4760]: I1125 10:14:24.009403 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:14:24 crc kubenswrapper[4760]: I1125 10:14:24.060341 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 10:14:24 crc kubenswrapper[4760]: I1125 10:14:24.734057 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" event={"ID":"041a93d0-ab8a-4cd6-9466-28e25cf986fc","Type":"ContainerStarted","Data":"bf2af1e44d39dc92cddc7292cb93d62bacd87bec5f2214880c439b1e6ef10424"} Nov 25 10:14:31 crc kubenswrapper[4760]: I1125 10:14:31.746098 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:14:31 crc kubenswrapper[4760]: I1125 10:14:31.746676 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 10:14:37 crc kubenswrapper[4760]: I1125 10:14:37.865934 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" event={"ID":"041a93d0-ab8a-4cd6-9466-28e25cf986fc","Type":"ContainerStarted","Data":"6f85cfd5345326e4740f9ef7e72114cfcb1ec734cccd4adde5be48a5d6ce6224"} Nov 25 10:14:37 crc kubenswrapper[4760]: I1125 10:14:37.884703 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" podStartSLOduration=2.240601255 podStartE2EDuration="14.884681192s" podCreationTimestamp="2025-11-25 10:14:23 +0000 UTC" firstStartedPulling="2025-11-25 10:14:24.060097455 +0000 UTC m=+7397.769128250" lastFinishedPulling="2025-11-25 10:14:36.704177392 +0000 UTC m=+7410.413208187" observedRunningTime="2025-11-25 10:14:37.882270973 +0000 UTC m=+7411.591301768" watchObservedRunningTime="2025-11-25 10:14:37.884681192 +0000 UTC m=+7411.593711987" Nov 25 10:14:56 crc kubenswrapper[4760]: E1125 10:14:56.945504 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.264647 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2"] Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.266152 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.268657 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.269157 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.278466 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2"] Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.300576 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r92z7\" (UniqueName: \"kubernetes.io/projected/4fb8826f-88db-43af-93b3-cf07c969f874-kube-api-access-r92z7\") pod \"collect-profiles-29401095-jh4f2\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.300664 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fb8826f-88db-43af-93b3-cf07c969f874-config-volume\") pod \"collect-profiles-29401095-jh4f2\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.301293 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fb8826f-88db-43af-93b3-cf07c969f874-secret-volume\") pod \"collect-profiles-29401095-jh4f2\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.403773 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r92z7\" (UniqueName: \"kubernetes.io/projected/4fb8826f-88db-43af-93b3-cf07c969f874-kube-api-access-r92z7\") pod \"collect-profiles-29401095-jh4f2\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.404180 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fb8826f-88db-43af-93b3-cf07c969f874-config-volume\") pod \"collect-profiles-29401095-jh4f2\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.404344 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fb8826f-88db-43af-93b3-cf07c969f874-secret-volume\") pod \"collect-profiles-29401095-jh4f2\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.405060 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fb8826f-88db-43af-93b3-cf07c969f874-config-volume\") pod \"collect-profiles-29401095-jh4f2\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.413221 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4fb8826f-88db-43af-93b3-cf07c969f874-secret-volume\") pod \"collect-profiles-29401095-jh4f2\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.426922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r92z7\" (UniqueName: \"kubernetes.io/projected/4fb8826f-88db-43af-93b3-cf07c969f874-kube-api-access-r92z7\") pod \"collect-profiles-29401095-jh4f2\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:00 crc kubenswrapper[4760]: I1125 10:15:00.609092 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:01 crc kubenswrapper[4760]: I1125 10:15:01.674798 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2"] Nov 25 10:15:01 crc kubenswrapper[4760]: I1125 10:15:01.746232 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:15:01 crc kubenswrapper[4760]: I1125 10:15:01.746588 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:15:01 crc kubenswrapper[4760]: I1125 10:15:01.746636 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 10:15:01 crc kubenswrapper[4760]: I1125 10:15:01.747873 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:15:01 crc kubenswrapper[4760]: I1125 10:15:01.747936 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" gracePeriod=600 Nov 25 10:15:01 crc kubenswrapper[4760]: E1125 10:15:01.898574 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:15:02 crc kubenswrapper[4760]: I1125 10:15:02.145238 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" exitCode=0 Nov 25 10:15:02 crc kubenswrapper[4760]: I1125 10:15:02.145308 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98"} Nov 25 10:15:02 crc 
kubenswrapper[4760]: I1125 10:15:02.145884 4760 scope.go:117] "RemoveContainer" containerID="97bc83681ec651ceba1b9d3f3554238617eb62f9df1e07f0a88e8966aea91621" Nov 25 10:15:02 crc kubenswrapper[4760]: I1125 10:15:02.146487 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:15:02 crc kubenswrapper[4760]: E1125 10:15:02.146728 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:15:02 crc kubenswrapper[4760]: I1125 10:15:02.150884 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" event={"ID":"4fb8826f-88db-43af-93b3-cf07c969f874","Type":"ContainerStarted","Data":"37aa82fada7e81226c2d5b1072f20dfa81ebed603d05fc434187308f0aa7f0d5"} Nov 25 10:15:02 crc kubenswrapper[4760]: I1125 10:15:02.150917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" event={"ID":"4fb8826f-88db-43af-93b3-cf07c969f874","Type":"ContainerStarted","Data":"5c6ccb082cf8a9a3b8e9f32c29a7c18c30a4128cd444cb3416f17b45fcd9d24b"} Nov 25 10:15:02 crc kubenswrapper[4760]: I1125 10:15:02.184667 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" podStartSLOduration=2.184647399 podStartE2EDuration="2.184647399s" podCreationTimestamp="2025-11-25 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 
10:15:02.18152189 +0000 UTC m=+7435.890552685" watchObservedRunningTime="2025-11-25 10:15:02.184647399 +0000 UTC m=+7435.893678204" Nov 25 10:15:03 crc kubenswrapper[4760]: I1125 10:15:03.164969 4760 generic.go:334] "Generic (PLEG): container finished" podID="4fb8826f-88db-43af-93b3-cf07c969f874" containerID="37aa82fada7e81226c2d5b1072f20dfa81ebed603d05fc434187308f0aa7f0d5" exitCode=0 Nov 25 10:15:03 crc kubenswrapper[4760]: I1125 10:15:03.165165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" event={"ID":"4fb8826f-88db-43af-93b3-cf07c969f874","Type":"ContainerDied","Data":"37aa82fada7e81226c2d5b1072f20dfa81ebed603d05fc434187308f0aa7f0d5"} Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.658467 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.716903 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fb8826f-88db-43af-93b3-cf07c969f874-config-volume\") pod \"4fb8826f-88db-43af-93b3-cf07c969f874\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.717270 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r92z7\" (UniqueName: \"kubernetes.io/projected/4fb8826f-88db-43af-93b3-cf07c969f874-kube-api-access-r92z7\") pod \"4fb8826f-88db-43af-93b3-cf07c969f874\" (UID: \"4fb8826f-88db-43af-93b3-cf07c969f874\") " Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.717627 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fb8826f-88db-43af-93b3-cf07c969f874-secret-volume\") pod \"4fb8826f-88db-43af-93b3-cf07c969f874\" (UID: 
\"4fb8826f-88db-43af-93b3-cf07c969f874\") " Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.718105 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fb8826f-88db-43af-93b3-cf07c969f874-config-volume" (OuterVolumeSpecName: "config-volume") pod "4fb8826f-88db-43af-93b3-cf07c969f874" (UID: "4fb8826f-88db-43af-93b3-cf07c969f874"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.718557 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fb8826f-88db-43af-93b3-cf07c969f874-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.732812 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb8826f-88db-43af-93b3-cf07c969f874-kube-api-access-r92z7" (OuterVolumeSpecName: "kube-api-access-r92z7") pod "4fb8826f-88db-43af-93b3-cf07c969f874" (UID: "4fb8826f-88db-43af-93b3-cf07c969f874"). InnerVolumeSpecName "kube-api-access-r92z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.737777 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb8826f-88db-43af-93b3-cf07c969f874-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4fb8826f-88db-43af-93b3-cf07c969f874" (UID: "4fb8826f-88db-43af-93b3-cf07c969f874"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.820754 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r92z7\" (UniqueName: \"kubernetes.io/projected/4fb8826f-88db-43af-93b3-cf07c969f874-kube-api-access-r92z7\") on node \"crc\" DevicePath \"\"" Nov 25 10:15:04 crc kubenswrapper[4760]: I1125 10:15:04.820788 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4fb8826f-88db-43af-93b3-cf07c969f874-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:15:05 crc kubenswrapper[4760]: I1125 10:15:05.191576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" event={"ID":"4fb8826f-88db-43af-93b3-cf07c969f874","Type":"ContainerDied","Data":"5c6ccb082cf8a9a3b8e9f32c29a7c18c30a4128cd444cb3416f17b45fcd9d24b"} Nov 25 10:15:05 crc kubenswrapper[4760]: I1125 10:15:05.191613 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c6ccb082cf8a9a3b8e9f32c29a7c18c30a4128cd444cb3416f17b45fcd9d24b" Nov 25 10:15:05 crc kubenswrapper[4760]: I1125 10:15:05.191674 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401095-jh4f2" Nov 25 10:15:05 crc kubenswrapper[4760]: I1125 10:15:05.769415 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2"] Nov 25 10:15:05 crc kubenswrapper[4760]: I1125 10:15:05.780084 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401050-s7vw2"] Nov 25 10:15:06 crc kubenswrapper[4760]: I1125 10:15:06.974214 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d1e8dd-a7c0-447d-82c2-c3382cd582e9" path="/var/lib/kubelet/pods/47d1e8dd-a7c0-447d-82c2-c3382cd582e9/volumes" Nov 25 10:15:14 crc kubenswrapper[4760]: I1125 10:15:14.938624 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:15:14 crc kubenswrapper[4760]: E1125 10:15:14.939454 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:15:27 crc kubenswrapper[4760]: I1125 10:15:27.938579 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:15:27 crc kubenswrapper[4760]: E1125 10:15:27.939202 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:15:33 crc kubenswrapper[4760]: I1125 10:15:33.448115 4760 generic.go:334] "Generic (PLEG): container finished" podID="041a93d0-ab8a-4cd6-9466-28e25cf986fc" containerID="6f85cfd5345326e4740f9ef7e72114cfcb1ec734cccd4adde5be48a5d6ce6224" exitCode=0 Nov 25 10:15:33 crc kubenswrapper[4760]: I1125 10:15:33.448191 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" event={"ID":"041a93d0-ab8a-4cd6-9466-28e25cf986fc","Type":"ContainerDied","Data":"6f85cfd5345326e4740f9ef7e72114cfcb1ec734cccd4adde5be48a5d6ce6224"} Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.560676 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.593632 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x8nbl/crc-debug-6vkh8"] Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.605892 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x8nbl/crc-debug-6vkh8"] Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.733827 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/041a93d0-ab8a-4cd6-9466-28e25cf986fc-host\") pod \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\" (UID: \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\") " Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.733963 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7685v\" (UniqueName: \"kubernetes.io/projected/041a93d0-ab8a-4cd6-9466-28e25cf986fc-kube-api-access-7685v\") pod \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\" (UID: \"041a93d0-ab8a-4cd6-9466-28e25cf986fc\") " Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.733970 4760 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/041a93d0-ab8a-4cd6-9466-28e25cf986fc-host" (OuterVolumeSpecName: "host") pod "041a93d0-ab8a-4cd6-9466-28e25cf986fc" (UID: "041a93d0-ab8a-4cd6-9466-28e25cf986fc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.734561 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/041a93d0-ab8a-4cd6-9466-28e25cf986fc-host\") on node \"crc\" DevicePath \"\"" Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.739191 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041a93d0-ab8a-4cd6-9466-28e25cf986fc-kube-api-access-7685v" (OuterVolumeSpecName: "kube-api-access-7685v") pod "041a93d0-ab8a-4cd6-9466-28e25cf986fc" (UID: "041a93d0-ab8a-4cd6-9466-28e25cf986fc"). InnerVolumeSpecName "kube-api-access-7685v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.836552 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7685v\" (UniqueName: \"kubernetes.io/projected/041a93d0-ab8a-4cd6-9466-28e25cf986fc-kube-api-access-7685v\") on node \"crc\" DevicePath \"\"" Nov 25 10:15:34 crc kubenswrapper[4760]: I1125 10:15:34.955551 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041a93d0-ab8a-4cd6-9466-28e25cf986fc" path="/var/lib/kubelet/pods/041a93d0-ab8a-4cd6-9466-28e25cf986fc/volumes" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.469146 4760 scope.go:117] "RemoveContainer" containerID="6f85cfd5345326e4740f9ef7e72114cfcb1ec734cccd4adde5be48a5d6ce6224" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.469238 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-6vkh8" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.758808 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x8nbl/crc-debug-jzxmj"] Nov 25 10:15:35 crc kubenswrapper[4760]: E1125 10:15:35.759225 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041a93d0-ab8a-4cd6-9466-28e25cf986fc" containerName="container-00" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.759238 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="041a93d0-ab8a-4cd6-9466-28e25cf986fc" containerName="container-00" Nov 25 10:15:35 crc kubenswrapper[4760]: E1125 10:15:35.759269 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb8826f-88db-43af-93b3-cf07c969f874" containerName="collect-profiles" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.759275 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb8826f-88db-43af-93b3-cf07c969f874" containerName="collect-profiles" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.759474 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="041a93d0-ab8a-4cd6-9466-28e25cf986fc" containerName="container-00" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.759492 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb8826f-88db-43af-93b3-cf07c969f874" containerName="collect-profiles" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.760089 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.859405 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4633353-4af0-48f3-bf25-d17eba59b2d0-host\") pod \"crc-debug-jzxmj\" (UID: \"e4633353-4af0-48f3-bf25-d17eba59b2d0\") " pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.859503 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krkkz\" (UniqueName: \"kubernetes.io/projected/e4633353-4af0-48f3-bf25-d17eba59b2d0-kube-api-access-krkkz\") pod \"crc-debug-jzxmj\" (UID: \"e4633353-4af0-48f3-bf25-d17eba59b2d0\") " pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.961279 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4633353-4af0-48f3-bf25-d17eba59b2d0-host\") pod \"crc-debug-jzxmj\" (UID: \"e4633353-4af0-48f3-bf25-d17eba59b2d0\") " pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.961404 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krkkz\" (UniqueName: \"kubernetes.io/projected/e4633353-4af0-48f3-bf25-d17eba59b2d0-kube-api-access-krkkz\") pod \"crc-debug-jzxmj\" (UID: \"e4633353-4af0-48f3-bf25-d17eba59b2d0\") " pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:35 crc kubenswrapper[4760]: I1125 10:15:35.961414 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4633353-4af0-48f3-bf25-d17eba59b2d0-host\") pod \"crc-debug-jzxmj\" (UID: \"e4633353-4af0-48f3-bf25-d17eba59b2d0\") " pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:35 crc 
kubenswrapper[4760]: I1125 10:15:35.979370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krkkz\" (UniqueName: \"kubernetes.io/projected/e4633353-4af0-48f3-bf25-d17eba59b2d0-kube-api-access-krkkz\") pod \"crc-debug-jzxmj\" (UID: \"e4633353-4af0-48f3-bf25-d17eba59b2d0\") " pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:36 crc kubenswrapper[4760]: I1125 10:15:36.076773 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:36 crc kubenswrapper[4760]: I1125 10:15:36.481060 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" event={"ID":"e4633353-4af0-48f3-bf25-d17eba59b2d0","Type":"ContainerStarted","Data":"c828b7042ee676c0f3ba4820fb021a2c2b26f1ac366cda7d627346e68d2cfa16"} Nov 25 10:15:36 crc kubenswrapper[4760]: I1125 10:15:36.481424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" event={"ID":"e4633353-4af0-48f3-bf25-d17eba59b2d0","Type":"ContainerStarted","Data":"b1881d772d2bd936440c4f2297e6172a4cc7193a107b8609a7889f47a34b3236"} Nov 25 10:15:36 crc kubenswrapper[4760]: I1125 10:15:36.497108 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" podStartSLOduration=1.497087243 podStartE2EDuration="1.497087243s" podCreationTimestamp="2025-11-25 10:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:15:36.493737177 +0000 UTC m=+7470.202767982" watchObservedRunningTime="2025-11-25 10:15:36.497087243 +0000 UTC m=+7470.206118038" Nov 25 10:15:36 crc kubenswrapper[4760]: I1125 10:15:36.607946 4760 scope.go:117] "RemoveContainer" containerID="8b52dedb7ceb50f33366c40b4a825bd32ece4e5e78f41132ea64657e3f8e9041" Nov 25 10:15:37 crc kubenswrapper[4760]: I1125 
10:15:37.497053 4760 generic.go:334] "Generic (PLEG): container finished" podID="e4633353-4af0-48f3-bf25-d17eba59b2d0" containerID="c828b7042ee676c0f3ba4820fb021a2c2b26f1ac366cda7d627346e68d2cfa16" exitCode=0 Nov 25 10:15:37 crc kubenswrapper[4760]: I1125 10:15:37.497154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" event={"ID":"e4633353-4af0-48f3-bf25-d17eba59b2d0","Type":"ContainerDied","Data":"c828b7042ee676c0f3ba4820fb021a2c2b26f1ac366cda7d627346e68d2cfa16"} Nov 25 10:15:38 crc kubenswrapper[4760]: I1125 10:15:38.622909 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:38 crc kubenswrapper[4760]: I1125 10:15:38.810939 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krkkz\" (UniqueName: \"kubernetes.io/projected/e4633353-4af0-48f3-bf25-d17eba59b2d0-kube-api-access-krkkz\") pod \"e4633353-4af0-48f3-bf25-d17eba59b2d0\" (UID: \"e4633353-4af0-48f3-bf25-d17eba59b2d0\") " Nov 25 10:15:38 crc kubenswrapper[4760]: I1125 10:15:38.811028 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4633353-4af0-48f3-bf25-d17eba59b2d0-host\") pod \"e4633353-4af0-48f3-bf25-d17eba59b2d0\" (UID: \"e4633353-4af0-48f3-bf25-d17eba59b2d0\") " Nov 25 10:15:38 crc kubenswrapper[4760]: I1125 10:15:38.811142 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4633353-4af0-48f3-bf25-d17eba59b2d0-host" (OuterVolumeSpecName: "host") pod "e4633353-4af0-48f3-bf25-d17eba59b2d0" (UID: "e4633353-4af0-48f3-bf25-d17eba59b2d0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:15:38 crc kubenswrapper[4760]: I1125 10:15:38.811628 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4633353-4af0-48f3-bf25-d17eba59b2d0-host\") on node \"crc\" DevicePath \"\"" Nov 25 10:15:38 crc kubenswrapper[4760]: I1125 10:15:38.820588 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4633353-4af0-48f3-bf25-d17eba59b2d0-kube-api-access-krkkz" (OuterVolumeSpecName: "kube-api-access-krkkz") pod "e4633353-4af0-48f3-bf25-d17eba59b2d0" (UID: "e4633353-4af0-48f3-bf25-d17eba59b2d0"). InnerVolumeSpecName "kube-api-access-krkkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:15:38 crc kubenswrapper[4760]: I1125 10:15:38.913263 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krkkz\" (UniqueName: \"kubernetes.io/projected/e4633353-4af0-48f3-bf25-d17eba59b2d0-kube-api-access-krkkz\") on node \"crc\" DevicePath \"\"" Nov 25 10:15:38 crc kubenswrapper[4760]: I1125 10:15:38.955717 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x8nbl/crc-debug-jzxmj"] Nov 25 10:15:38 crc kubenswrapper[4760]: I1125 10:15:38.966468 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x8nbl/crc-debug-jzxmj"] Nov 25 10:15:39 crc kubenswrapper[4760]: I1125 10:15:39.518280 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1881d772d2bd936440c4f2297e6172a4cc7193a107b8609a7889f47a34b3236" Nov 25 10:15:39 crc kubenswrapper[4760]: I1125 10:15:39.518320 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-jzxmj" Nov 25 10:15:39 crc kubenswrapper[4760]: I1125 10:15:39.938737 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:15:39 crc kubenswrapper[4760]: E1125 10:15:39.939311 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.128772 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x8nbl/crc-debug-9cm5x"] Nov 25 10:15:40 crc kubenswrapper[4760]: E1125 10:15:40.129204 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4633353-4af0-48f3-bf25-d17eba59b2d0" containerName="container-00" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.129219 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4633353-4af0-48f3-bf25-d17eba59b2d0" containerName="container-00" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.129516 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4633353-4af0-48f3-bf25-d17eba59b2d0" containerName="container-00" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.130117 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.239050 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n48z4\" (UniqueName: \"kubernetes.io/projected/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-kube-api-access-n48z4\") pod \"crc-debug-9cm5x\" (UID: \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\") " pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.239468 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-host\") pod \"crc-debug-9cm5x\" (UID: \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\") " pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.342146 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n48z4\" (UniqueName: \"kubernetes.io/projected/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-kube-api-access-n48z4\") pod \"crc-debug-9cm5x\" (UID: \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\") " pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.342205 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-host\") pod \"crc-debug-9cm5x\" (UID: \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\") " pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.342453 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-host\") pod \"crc-debug-9cm5x\" (UID: \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\") " pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:40 crc 
kubenswrapper[4760]: I1125 10:15:40.362432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n48z4\" (UniqueName: \"kubernetes.io/projected/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-kube-api-access-n48z4\") pod \"crc-debug-9cm5x\" (UID: \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\") " pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.450225 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:40 crc kubenswrapper[4760]: W1125 10:15:40.511952 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c1bfd7f_2156_4d84_bcfd_4d916a75a452.slice/crio-2fce4a9bd2ea8704bb63e6cd5eb448e2e9847282af6117564e3b3ebcb82c98ff WatchSource:0}: Error finding container 2fce4a9bd2ea8704bb63e6cd5eb448e2e9847282af6117564e3b3ebcb82c98ff: Status 404 returned error can't find the container with id 2fce4a9bd2ea8704bb63e6cd5eb448e2e9847282af6117564e3b3ebcb82c98ff Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.544042 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" event={"ID":"8c1bfd7f-2156-4d84-bcfd-4d916a75a452","Type":"ContainerStarted","Data":"2fce4a9bd2ea8704bb63e6cd5eb448e2e9847282af6117564e3b3ebcb82c98ff"} Nov 25 10:15:40 crc kubenswrapper[4760]: I1125 10:15:40.955841 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4633353-4af0-48f3-bf25-d17eba59b2d0" path="/var/lib/kubelet/pods/e4633353-4af0-48f3-bf25-d17eba59b2d0/volumes" Nov 25 10:15:41 crc kubenswrapper[4760]: I1125 10:15:41.554887 4760 generic.go:334] "Generic (PLEG): container finished" podID="8c1bfd7f-2156-4d84-bcfd-4d916a75a452" containerID="b47ccc716c3d25cb866684e61fd65f5d99c2899587e73a6ee86a14a41b625b39" exitCode=0 Nov 25 10:15:41 crc kubenswrapper[4760]: I1125 10:15:41.554928 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" event={"ID":"8c1bfd7f-2156-4d84-bcfd-4d916a75a452","Type":"ContainerDied","Data":"b47ccc716c3d25cb866684e61fd65f5d99c2899587e73a6ee86a14a41b625b39"} Nov 25 10:15:41 crc kubenswrapper[4760]: I1125 10:15:41.599327 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x8nbl/crc-debug-9cm5x"] Nov 25 10:15:41 crc kubenswrapper[4760]: I1125 10:15:41.611545 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x8nbl/crc-debug-9cm5x"] Nov 25 10:15:42 crc kubenswrapper[4760]: I1125 10:15:42.666606 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:42 crc kubenswrapper[4760]: I1125 10:15:42.706178 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n48z4\" (UniqueName: \"kubernetes.io/projected/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-kube-api-access-n48z4\") pod \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\" (UID: \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\") " Nov 25 10:15:42 crc kubenswrapper[4760]: I1125 10:15:42.706291 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-host\") pod \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\" (UID: \"8c1bfd7f-2156-4d84-bcfd-4d916a75a452\") " Nov 25 10:15:42 crc kubenswrapper[4760]: I1125 10:15:42.706414 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-host" (OuterVolumeSpecName: "host") pod "8c1bfd7f-2156-4d84-bcfd-4d916a75a452" (UID: "8c1bfd7f-2156-4d84-bcfd-4d916a75a452"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:15:42 crc kubenswrapper[4760]: I1125 10:15:42.707080 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-host\") on node \"crc\" DevicePath \"\"" Nov 25 10:15:42 crc kubenswrapper[4760]: I1125 10:15:42.713526 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-kube-api-access-n48z4" (OuterVolumeSpecName: "kube-api-access-n48z4") pod "8c1bfd7f-2156-4d84-bcfd-4d916a75a452" (UID: "8c1bfd7f-2156-4d84-bcfd-4d916a75a452"). InnerVolumeSpecName "kube-api-access-n48z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:15:42 crc kubenswrapper[4760]: I1125 10:15:42.808942 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n48z4\" (UniqueName: \"kubernetes.io/projected/8c1bfd7f-2156-4d84-bcfd-4d916a75a452-kube-api-access-n48z4\") on node \"crc\" DevicePath \"\"" Nov 25 10:15:42 crc kubenswrapper[4760]: I1125 10:15:42.949995 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1bfd7f-2156-4d84-bcfd-4d916a75a452" path="/var/lib/kubelet/pods/8c1bfd7f-2156-4d84-bcfd-4d916a75a452/volumes" Nov 25 10:15:43 crc kubenswrapper[4760]: I1125 10:15:43.574625 4760 scope.go:117] "RemoveContainer" containerID="b47ccc716c3d25cb866684e61fd65f5d99c2899587e73a6ee86a14a41b625b39" Nov 25 10:15:43 crc kubenswrapper[4760]: I1125 10:15:43.574656 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8nbl/crc-debug-9cm5x" Nov 25 10:15:52 crc kubenswrapper[4760]: I1125 10:15:52.938606 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:15:52 crc kubenswrapper[4760]: E1125 10:15:52.939432 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:16:03 crc kubenswrapper[4760]: I1125 10:16:03.939843 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:16:03 crc kubenswrapper[4760]: E1125 10:16:03.940652 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:16:05 crc kubenswrapper[4760]: E1125 10:16:05.939226 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:16:15 crc kubenswrapper[4760]: I1125 10:16:15.938896 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:16:15 crc kubenswrapper[4760]: E1125 10:16:15.939614 4760 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:16:29 crc kubenswrapper[4760]: I1125 10:16:29.784608 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ansibletest-ansibletest_5fd9b990-91a9-4529-a951-15647544f5ec/ansibletest-ansibletest/0.log" Nov 25 10:16:29 crc kubenswrapper[4760]: I1125 10:16:29.938396 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:16:29 crc kubenswrapper[4760]: E1125 10:16:29.938946 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:16:29 crc kubenswrapper[4760]: I1125 10:16:29.972389 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d84fc8b6b-jxtfg_d99a8e14-f31b-45d8-8e74-8ace724974ad/barbican-api/0.log" Nov 25 10:16:29 crc kubenswrapper[4760]: I1125 10:16:29.975532 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d84fc8b6b-jxtfg_d99a8e14-f31b-45d8-8e74-8ace724974ad/barbican-api-log/0.log" Nov 25 10:16:30 crc kubenswrapper[4760]: I1125 10:16:30.143871 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-6b6b6b98f4-9l69x_5d7c9636-175f-4d7e-b3c7-86586c9a8734/barbican-keystone-listener/0.log" Nov 25 10:16:30 crc kubenswrapper[4760]: I1125 10:16:30.347279 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d9875665c-r8sg4_2b1b4f65-ed06-4d6d-9e74-b27255748225/barbican-worker/0.log" Nov 25 10:16:30 crc kubenswrapper[4760]: I1125 10:16:30.393470 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d9875665c-r8sg4_2b1b4f65-ed06-4d6d-9e74-b27255748225/barbican-worker-log/0.log" Nov 25 10:16:30 crc kubenswrapper[4760]: I1125 10:16:30.596106 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh_e324f737-7225-41ec-b3c5-6cc0c2931377/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:30 crc kubenswrapper[4760]: I1125 10:16:30.710569 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b6b6b98f4-9l69x_5d7c9636-175f-4d7e-b3c7-86586c9a8734/barbican-keystone-listener-log/0.log" Nov 25 10:16:30 crc kubenswrapper[4760]: I1125 10:16:30.770292 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a55ce36-9d78-4311-a68e-507467c7a1ec/ceilometer-central-agent/0.log" Nov 25 10:16:30 crc kubenswrapper[4760]: I1125 10:16:30.892801 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a55ce36-9d78-4311-a68e-507467c7a1ec/proxy-httpd/0.log" Nov 25 10:16:30 crc kubenswrapper[4760]: I1125 10:16:30.899829 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a55ce36-9d78-4311-a68e-507467c7a1ec/ceilometer-notification-agent/0.log" Nov 25 10:16:30 crc kubenswrapper[4760]: I1125 10:16:30.921241 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a55ce36-9d78-4311-a68e-507467c7a1ec/sg-core/0.log" Nov 
25 10:16:31 crc kubenswrapper[4760]: I1125 10:16:31.100005 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb_60d03216-7d4d-433d-9e84-7b6a6b399a5f/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:31 crc kubenswrapper[4760]: I1125 10:16:31.124077 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb_5d87e41c-e89d-4b52-83b7-79d77bee80d9/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:31 crc kubenswrapper[4760]: I1125 10:16:31.335402 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c0a8e435-6d04-48d6-b723-252b8358b055/cinder-api-log/0.log" Nov 25 10:16:31 crc kubenswrapper[4760]: I1125 10:16:31.442929 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c0a8e435-6d04-48d6-b723-252b8358b055/cinder-api/0.log" Nov 25 10:16:31 crc kubenswrapper[4760]: I1125 10:16:31.635644 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_09dd7945-dda4-4682-b55e-44569ec2bc78/cinder-backup/0.log" Nov 25 10:16:31 crc kubenswrapper[4760]: I1125 10:16:31.669286 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_09dd7945-dda4-4682-b55e-44569ec2bc78/probe/0.log" Nov 25 10:16:31 crc kubenswrapper[4760]: I1125 10:16:31.756577 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4e64f72-cbdd-44dc-9c1f-21b88eae9288/cinder-scheduler/0.log" Nov 25 10:16:31 crc kubenswrapper[4760]: I1125 10:16:31.905135 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4e64f72-cbdd-44dc-9c1f-21b88eae9288/probe/0.log" Nov 25 10:16:31 crc kubenswrapper[4760]: I1125 10:16:31.977755 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-volume-volume1-0_f4f729ff-1806-4032-922b-2a47e4a9d7ff/cinder-volume/0.log" Nov 25 10:16:32 crc kubenswrapper[4760]: I1125 10:16:32.026375 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f4f729ff-1806-4032-922b-2a47e4a9d7ff/probe/0.log" Nov 25 10:16:32 crc kubenswrapper[4760]: I1125 10:16:32.199826 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd_ed298743-8f13-44a6-bbff-1b5702a1a0f5/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:32 crc kubenswrapper[4760]: I1125 10:16:32.289568 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-824fv_bbb80fb1-9cd8-4326-9db9-88edd50fc0d4/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:32 crc kubenswrapper[4760]: I1125 10:16:32.442259 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6885d49d55-9mqqw_1b305350-e74d-4e9a-8af0-14e88ddfccc0/init/0.log" Nov 25 10:16:32 crc kubenswrapper[4760]: I1125 10:16:32.610946 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6885d49d55-9mqqw_1b305350-e74d-4e9a-8af0-14e88ddfccc0/init/0.log" Nov 25 10:16:32 crc kubenswrapper[4760]: I1125 10:16:32.890385 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a3c90ae6-873c-4a00-84a0-a9a60fcc7c74/glance-httpd/0.log" Nov 25 10:16:32 crc kubenswrapper[4760]: I1125 10:16:32.986353 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a3c90ae6-873c-4a00-84a0-a9a60fcc7c74/glance-log/0.log" Nov 25 10:16:33 crc kubenswrapper[4760]: I1125 10:16:33.131807 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6885d49d55-9mqqw_1b305350-e74d-4e9a-8af0-14e88ddfccc0/dnsmasq-dns/0.log" Nov 25 10:16:33 crc kubenswrapper[4760]: I1125 10:16:33.209893 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cad7cc0f-3821-44ee-8b39-71988664ee4e/glance-httpd/0.log" Nov 25 10:16:33 crc kubenswrapper[4760]: I1125 10:16:33.233536 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cad7cc0f-3821-44ee-8b39-71988664ee4e/glance-log/0.log" Nov 25 10:16:33 crc kubenswrapper[4760]: I1125 10:16:33.506018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6655684d54-8jfvz_0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc/horizon/0.log" Nov 25 10:16:33 crc kubenswrapper[4760]: I1125 10:16:33.679564 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizontest-tests-horizontest_aa57ea6c-4740-4010-a3d6-a0e070615d40/horizontest-tests-horizontest/0.log" Nov 25 10:16:33 crc kubenswrapper[4760]: I1125 10:16:33.775112 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j_be1883ad-ca79-4bec-89f9-9b783c5047df/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:33 crc kubenswrapper[4760]: I1125 10:16:33.899853 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ggjjn_e3e21edb-5737-49cd-bc9c-407e5f7f5445/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:34 crc kubenswrapper[4760]: I1125 10:16:34.167686 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401021-hxq6l_54e54192-6eff-4b00-a1f6-f9290cb87eca/keystone-cron/0.log" Nov 25 10:16:34 crc kubenswrapper[4760]: I1125 10:16:34.385434 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29401081-7bnrg_d796b091-56b6-4f51-95f8-a4f01db5d9a6/keystone-cron/0.log" Nov 25 10:16:34 crc kubenswrapper[4760]: I1125 10:16:34.578522 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bd20932f-cb28-4343-98df-425123f7c87f/kube-state-metrics/3.log" Nov 25 10:16:34 crc kubenswrapper[4760]: I1125 10:16:34.613834 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bd20932f-cb28-4343-98df-425123f7c87f/kube-state-metrics/2.log" Nov 25 10:16:34 crc kubenswrapper[4760]: I1125 10:16:34.903753 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs_2d913348-cf44-4539-b090-181ea0720a33/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:35 crc kubenswrapper[4760]: I1125 10:16:35.136569 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0cc0b6e2-9204-474d-842c-c488ff0811a4/manila-api-log/0.log" Nov 25 10:16:35 crc kubenswrapper[4760]: I1125 10:16:35.145105 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0cc0b6e2-9204-474d-842c-c488ff0811a4/manila-api/0.log" Nov 25 10:16:35 crc kubenswrapper[4760]: I1125 10:16:35.165913 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6655684d54-8jfvz_0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc/horizon-log/0.log" Nov 25 10:16:35 crc kubenswrapper[4760]: I1125 10:16:35.469597 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_f5b0fe2e-7460-4e1d-85f9-5cccfba89817/probe/0.log" Nov 25 10:16:35 crc kubenswrapper[4760]: I1125 10:16:35.600115 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_4424df0c-a7e7-4880-aeb3-e8beaaa57b80/manila-share/0.log" Nov 25 10:16:35 crc kubenswrapper[4760]: I1125 10:16:35.643878 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_f5b0fe2e-7460-4e1d-85f9-5cccfba89817/manila-scheduler/0.log" Nov 25 10:16:35 crc kubenswrapper[4760]: I1125 10:16:35.765941 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_4424df0c-a7e7-4880-aeb3-e8beaaa57b80/probe/0.log" Nov 25 10:16:36 crc kubenswrapper[4760]: I1125 10:16:36.471750 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827_01b4af7c-f553-48d7-9166-856497bbe664/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:37 crc kubenswrapper[4760]: I1125 10:16:37.226583 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-564c475cd5-6wg66_9937626b-b050-469f-9e47-78785cfb5c15/neutron-httpd/0.log" Nov 25 10:16:37 crc kubenswrapper[4760]: I1125 10:16:37.900774 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69cbccbbcc-v8kx4_66326df4-af7d-474c-b63f-eee554099e1c/keystone-api/0.log" Nov 25 10:16:38 crc kubenswrapper[4760]: I1125 10:16:38.374439 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-564c475cd5-6wg66_9937626b-b050-469f-9e47-78785cfb5c15/neutron-api/0.log" Nov 25 10:16:38 crc kubenswrapper[4760]: I1125 10:16:38.933724 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8e3cadcf-b35a-4f88-9f0a-684f735164a0/nova-cell0-conductor-conductor/0.log" Nov 25 10:16:38 crc kubenswrapper[4760]: I1125 10:16:38.962238 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_db562c11-b116-4a44-9506-ef67f5211979/nova-cell1-conductor-conductor/0.log" Nov 25 10:16:39 crc kubenswrapper[4760]: I1125 10:16:39.361320 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_012fc757-399f-4a14-9ef8-332e3c34f53a/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 10:16:39 crc 
kubenswrapper[4760]: I1125 10:16:39.653482 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp_515be97b-ca6d-43a0-b8a1-471a782240bc/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:39 crc kubenswrapper[4760]: I1125 10:16:39.910881 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cf7e8b89-ff82-471f-9255-d3268551c726/nova-metadata-log/0.log" Nov 25 10:16:41 crc kubenswrapper[4760]: I1125 10:16:41.135659 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b4921858-b22b-474b-b8fb-6ccbd97bffac/nova-scheduler-scheduler/0.log" Nov 25 10:16:41 crc kubenswrapper[4760]: I1125 10:16:41.575493 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_17455e1c-2662-421d-ac93-ce773e1fd50a/mysql-bootstrap/0.log" Nov 25 10:16:41 crc kubenswrapper[4760]: I1125 10:16:41.785107 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_32c2adbb-f391-45e9-b20b-db6f61f927eb/nova-api-log/0.log" Nov 25 10:16:41 crc kubenswrapper[4760]: I1125 10:16:41.831890 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_17455e1c-2662-421d-ac93-ce773e1fd50a/mysql-bootstrap/0.log" Nov 25 10:16:42 crc kubenswrapper[4760]: I1125 10:16:42.176101 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_17455e1c-2662-421d-ac93-ce773e1fd50a/galera/0.log" Nov 25 10:16:42 crc kubenswrapper[4760]: I1125 10:16:42.381923 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_de9d3301-bdad-46bf-b7c2-4467cfd590dd/mysql-bootstrap/0.log" Nov 25 10:16:42 crc kubenswrapper[4760]: I1125 10:16:42.442581 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_de9d3301-bdad-46bf-b7c2-4467cfd590dd/mysql-bootstrap/0.log" Nov 25 10:16:42 crc kubenswrapper[4760]: I1125 10:16:42.498239 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_32c2adbb-f391-45e9-b20b-db6f61f927eb/nova-api-api/0.log" Nov 25 10:16:42 crc kubenswrapper[4760]: I1125 10:16:42.662064 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9df819bd-2ca5-4dd0-9409-e8d6e9a80b93/openstackclient/0.log" Nov 25 10:16:42 crc kubenswrapper[4760]: I1125 10:16:42.724684 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_de9d3301-bdad-46bf-b7c2-4467cfd590dd/galera/0.log" Nov 25 10:16:42 crc kubenswrapper[4760]: I1125 10:16:42.934013 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fgpnw_68c768c5-3e1e-41a8-af21-c886ea5959a3/openstack-network-exporter/0.log" Nov 25 10:16:43 crc kubenswrapper[4760]: I1125 10:16:43.106417 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kf25c_d1ba8a40-f479-46dc-b509-a9c4d9c4670b/ovsdb-server-init/0.log" Nov 25 10:16:43 crc kubenswrapper[4760]: I1125 10:16:43.277990 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kf25c_d1ba8a40-f479-46dc-b509-a9c4d9c4670b/ovsdb-server-init/0.log" Nov 25 10:16:43 crc kubenswrapper[4760]: I1125 10:16:43.301515 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kf25c_d1ba8a40-f479-46dc-b509-a9c4d9c4670b/ovs-vswitchd/0.log" Nov 25 10:16:43 crc kubenswrapper[4760]: I1125 10:16:43.343843 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kf25c_d1ba8a40-f479-46dc-b509-a9c4d9c4670b/ovsdb-server/0.log" Nov 25 10:16:43 crc kubenswrapper[4760]: I1125 10:16:43.531738 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-wtp5g_7b050dee-2005-4a2b-8550-6f5d055a86b6/ovn-controller/0.log" Nov 25 10:16:43 crc kubenswrapper[4760]: I1125 10:16:43.728862 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-kjm4v_eaf0aab3-fbd3-4389-ab45-8bd1c834f48f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:43 crc kubenswrapper[4760]: I1125 10:16:43.796970 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22e32299-69a7-4572-8ff1-1d2d409d5137/openstack-network-exporter/0.log" Nov 25 10:16:43 crc kubenswrapper[4760]: I1125 10:16:43.964978 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22e32299-69a7-4572-8ff1-1d2d409d5137/ovn-northd/0.log" Nov 25 10:16:44 crc kubenswrapper[4760]: I1125 10:16:44.018013 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_281d5fd5-dd87-4463-be57-4fd409cf4009/openstack-network-exporter/0.log" Nov 25 10:16:44 crc kubenswrapper[4760]: I1125 10:16:44.116194 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cf7e8b89-ff82-471f-9255-d3268551c726/nova-metadata-metadata/0.log" Nov 25 10:16:44 crc kubenswrapper[4760]: I1125 10:16:44.221054 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_281d5fd5-dd87-4463-be57-4fd409cf4009/ovsdbserver-nb/0.log" Nov 25 10:16:44 crc kubenswrapper[4760]: I1125 10:16:44.242971 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c1645e51-365a-4195-bb42-5641959bf77f/openstack-network-exporter/0.log" Nov 25 10:16:44 crc kubenswrapper[4760]: I1125 10:16:44.379388 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c1645e51-365a-4195-bb42-5641959bf77f/ovsdbserver-sb/0.log" Nov 25 10:16:44 crc kubenswrapper[4760]: I1125 10:16:44.690282 4760 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_54c05cca-ddf1-4567-b30b-f770bd6b6704/setup-container/0.log" Nov 25 10:16:44 crc kubenswrapper[4760]: I1125 10:16:44.898164 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_54c05cca-ddf1-4567-b30b-f770bd6b6704/setup-container/0.log" Nov 25 10:16:44 crc kubenswrapper[4760]: I1125 10:16:44.921714 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_54c05cca-ddf1-4567-b30b-f770bd6b6704/rabbitmq/0.log" Nov 25 10:16:44 crc kubenswrapper[4760]: I1125 10:16:44.938187 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:16:44 crc kubenswrapper[4760]: E1125 10:16:44.938488 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:16:45 crc kubenswrapper[4760]: I1125 10:16:45.062067 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-598d8454cd-s4vpx_5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4/placement-api/0.log" Nov 25 10:16:45 crc kubenswrapper[4760]: I1125 10:16:45.162030 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ac940436-7641-4872-8ab1-f6e0aca87e80/setup-container/0.log" Nov 25 10:16:45 crc kubenswrapper[4760]: I1125 10:16:45.197202 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-598d8454cd-s4vpx_5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4/placement-log/0.log" Nov 25 10:16:45 crc kubenswrapper[4760]: I1125 10:16:45.333515 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_ac940436-7641-4872-8ab1-f6e0aca87e80/setup-container/0.log" Nov 25 10:16:45 crc kubenswrapper[4760]: I1125 10:16:45.420934 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp_375f35df-5fe0-4456-9d10-649e72a962a7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:45 crc kubenswrapper[4760]: I1125 10:16:45.470672 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ac940436-7641-4872-8ab1-f6e0aca87e80/rabbitmq/0.log" Nov 25 10:16:45 crc kubenswrapper[4760]: I1125 10:16:45.621091 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5_5606daaf-d5b9-4ed2-a9aa-5e715141d4e4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:45 crc kubenswrapper[4760]: I1125 10:16:45.727510 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-n86sh_907a9527-c37d-4e36-9a7e-35066c230b6d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:45 crc kubenswrapper[4760]: I1125 10:16:45.880779 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jsv2p_6f68ee3f-7d13-433a-bc6b-504e98ff7b1d/ssh-known-hosts-edpm-deployment/0.log" Nov 25 10:16:46 crc kubenswrapper[4760]: I1125 10:16:46.066875 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_a546f694-04d6-4212-b53a-142420418b97/tempest-tests-tempest-tests-runner/0.log" Nov 25 10:16:46 crc kubenswrapper[4760]: I1125 10:16:46.170690 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_7e76e3b1-69e6-4498-b2f9-a52fdfe1650e/tempest-tests-tempest-tests-runner/0.log" Nov 25 10:16:46 crc kubenswrapper[4760]: I1125 10:16:46.335274 4760 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_test-operator-logs-pod-ansibletest-ansibletest-ansibletest_3d62b634-2cf7-42e7-b5d4-3791056b146a/test-operator-logs-container/0.log" Nov 25 10:16:46 crc kubenswrapper[4760]: I1125 10:16:46.435775 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-horizontest-horizontest-tests-horizontest_9b073dce-d4e1-4018-bfe6-f0a54597f116/test-operator-logs-container/0.log" Nov 25 10:16:46 crc kubenswrapper[4760]: I1125 10:16:46.589095 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9d79e9ee-084d-41e7-9513-aaea8863e85d/test-operator-logs-container/0.log" Nov 25 10:16:46 crc kubenswrapper[4760]: I1125 10:16:46.640139 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_c1a8f236-1676-4e0e-9395-8500fda5eba2/test-operator-logs-container/0.log" Nov 25 10:16:46 crc kubenswrapper[4760]: I1125 10:16:46.851696 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_5a899175-c606-4361-8300-3c2ed82d823c/tobiko-tests-tobiko/0.log" Nov 25 10:16:46 crc kubenswrapper[4760]: I1125 10:16:46.968879 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_8c968840-fcc2-4c11-baed-7477dfe970d2/tobiko-tests-tobiko/0.log" Nov 25 10:16:47 crc kubenswrapper[4760]: I1125 10:16:47.246655 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr_fd5f7e13-b05e-4843-930f-62a3bf6e7ddc/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:16:57 crc kubenswrapper[4760]: I1125 10:16:57.939119 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:16:57 crc kubenswrapper[4760]: E1125 10:16:57.939862 4760 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:17:01 crc kubenswrapper[4760]: I1125 10:17:01.384766 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f1b32df7-1040-4d21-89cd-d5f772bd4014/memcached/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.115160 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-hlbbf_97e97ce2-b50b-478e-acb2-cbdd5232d67c/kube-rbac-proxy/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.181992 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-hlbbf_97e97ce2-b50b-478e-acb2-cbdd5232d67c/manager/2.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.265705 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-hlbbf_97e97ce2-b50b-478e-acb2-cbdd5232d67c/manager/1.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.307484 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/util/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.473798 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/pull/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.526399 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/pull/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.560893 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/util/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.688900 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/util/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.695937 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/pull/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.701568 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/extract/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.903746 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k4dk2_03a9ee81-2733-444d-8edc-ddb1303b5686/manager/1.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.924320 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k4dk2_03a9ee81-2733-444d-8edc-ddb1303b5686/kube-rbac-proxy/0.log" Nov 25 10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.927407 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k4dk2_03a9ee81-2733-444d-8edc-ddb1303b5686/manager/2.log" Nov 25 
10:17:12 crc kubenswrapper[4760]: I1125 10:17:12.941879 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:17:12 crc kubenswrapper[4760]: E1125 10:17:12.945460 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.082973 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-xghfv_f531ae0e-78ad-4d2c-951f-0d1f7d1c8129/kube-rbac-proxy/0.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.122050 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-xghfv_f531ae0e-78ad-4d2c-951f-0d1f7d1c8129/manager/1.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.131594 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-xghfv_f531ae0e-78ad-4d2c-951f-0d1f7d1c8129/manager/2.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.278018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-6cjlz_25f372bf-e250-492b-abb9-680b1efdbdec/kube-rbac-proxy/0.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.295383 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-6cjlz_25f372bf-e250-492b-abb9-680b1efdbdec/manager/2.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 
10:17:13.319143 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-6cjlz_25f372bf-e250-492b-abb9-680b1efdbdec/manager/1.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.455389 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l24ns_b4325bd6-c276-4fbc-bc67-cf5a026c3537/kube-rbac-proxy/0.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.495392 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l24ns_b4325bd6-c276-4fbc-bc67-cf5a026c3537/manager/2.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.515449 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l24ns_b4325bd6-c276-4fbc-bc67-cf5a026c3537/manager/1.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.630393 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-l28cr_890067e5-2be8-4699-8d90-f2771ef453e5/kube-rbac-proxy/0.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.657835 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-l28cr_890067e5-2be8-4699-8d90-f2771ef453e5/manager/2.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.705033 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-l28cr_890067e5-2be8-4699-8d90-f2771ef453e5/manager/1.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.814877 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-njfjf_33faed21-8b19-4064-a6e2-5064ce8cbab2/kube-rbac-proxy/0.log" Nov 25 10:17:13 crc 
kubenswrapper[4760]: I1125 10:17:13.837111 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-njfjf_33faed21-8b19-4064-a6e2-5064ce8cbab2/manager/2.log" Nov 25 10:17:13 crc kubenswrapper[4760]: I1125 10:17:13.876325 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-njfjf_33faed21-8b19-4064-a6e2-5064ce8cbab2/manager/1.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.013609 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-x7r44_6dde35ac-ff01-4e46-9eae-234e6abc37dc/manager/2.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.023119 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-x7r44_6dde35ac-ff01-4e46-9eae-234e6abc37dc/kube-rbac-proxy/0.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.043860 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-x7r44_6dde35ac-ff01-4e46-9eae-234e6abc37dc/manager/1.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.192612 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-kw54v_1d556614-e3c1-4834-919a-0c6f5f5cc4de/kube-rbac-proxy/0.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.227553 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-kw54v_1d556614-e3c1-4834-919a-0c6f5f5cc4de/manager/3.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.264713 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-kw54v_1d556614-e3c1-4834-919a-0c6f5f5cc4de/manager/2.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.431761 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-s4q64_f0f31412-34be-4b9d-8df1-b53d23abb1f6/kube-rbac-proxy/0.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.487775 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-s4q64_f0f31412-34be-4b9d-8df1-b53d23abb1f6/manager/2.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.499842 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-s4q64_f0f31412-34be-4b9d-8df1-b53d23abb1f6/manager/1.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.605736 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-54bpm_002e6b13-60c5-484c-8116-b4d5241ed678/kube-rbac-proxy/0.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.681198 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-54bpm_002e6b13-60c5-484c-8116-b4d5241ed678/manager/3.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.701950 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-54bpm_002e6b13-60c5-484c-8116-b4d5241ed678/manager/2.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.839159 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l7cv5_9291524e-d650-4366-b795-162d53bf2815/kube-rbac-proxy/0.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.864412 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l7cv5_9291524e-d650-4366-b795-162d53bf2815/manager/2.log" Nov 25 10:17:14 crc kubenswrapper[4760]: I1125 10:17:14.904579 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l7cv5_9291524e-d650-4366-b795-162d53bf2815/manager/1.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.045164 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-cxjcf_4e773e83-c06c-47e9-8a34-ef72472e3ae8/kube-rbac-proxy/0.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.068142 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-cxjcf_4e773e83-c06c-47e9-8a34-ef72472e3ae8/manager/3.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.114514 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-cxjcf_4e773e83-c06c-47e9-8a34-ef72472e3ae8/manager/2.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.220778 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-j5fsj_23471a89-c4fb-4e45-b7bb-2664e4ea99f3/kube-rbac-proxy/0.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.260073 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-j5fsj_23471a89-c4fb-4e45-b7bb-2664e4ea99f3/manager/3.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.307880 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-j5fsj_23471a89-c4fb-4e45-b7bb-2664e4ea99f3/manager/2.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 
10:17:15.458416 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-c8gdx_59482a15-4638-4508-b60c-1c60c8df6d09/kube-rbac-proxy/0.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.458579 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-c8gdx_59482a15-4638-4508-b60c-1c60c8df6d09/manager/1.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.611232 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-c8gdx_59482a15-4638-4508-b60c-1c60c8df6d09/manager/0.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.633503 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cd5954d9-wmmn4_c43ab37e-375d-4000-8313-9ea135250641/manager/2.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.662957 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cd5954d9-wmmn4_c43ab37e-375d-4000-8313-9ea135250641/manager/3.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.776240 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7759656c4c-n49xc_fe16fe4f-1740-4d43-a0d2-0d1d649c853c/operator/1.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.870180 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7759656c4c-n49xc_fe16fe4f-1740-4d43-a0d2-0d1d649c853c/operator/0.log" Nov 25 10:17:15 crc kubenswrapper[4760]: I1125 10:17:15.955364 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-w94z5_7e50fb1c-ead6-4358-a11b-66963b307f3a/registry-server/0.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.018843 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wvv98_65361481-df4d-4010-a478-91fd2c50d9e6/kube-rbac-proxy/0.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.062567 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wvv98_65361481-df4d-4010-a478-91fd2c50d9e6/manager/2.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.079713 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wvv98_65361481-df4d-4010-a478-91fd2c50d9e6/manager/1.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.164344 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-w4gcn_6d9d0ad6-0976-4f14-81fb-f286f6768256/kube-rbac-proxy/0.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.214353 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-w4gcn_6d9d0ad6-0976-4f14-81fb-f286f6768256/manager/2.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.285199 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-w4gcn_6d9d0ad6-0976-4f14-81fb-f286f6768256/manager/1.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.353155 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5crqc_a9a9b42e-4d3b-495e-804e-af02af05581d/operator/3.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.403762 4760 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5crqc_a9a9b42e-4d3b-495e-804e-af02af05581d/operator/2.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.481901 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pmw6n_8aea8bb6-720b-412a-acfc-f62366da5de5/kube-rbac-proxy/0.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.483502 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pmw6n_8aea8bb6-720b-412a-acfc-f62366da5de5/manager/3.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.522265 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pmw6n_8aea8bb6-720b-412a-acfc-f62366da5de5/manager/2.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.661512 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-plxrr_cef58941-ae6b-4624-af41-65ab598838eb/kube-rbac-proxy/0.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.664215 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-plxrr_cef58941-ae6b-4624-af41-65ab598838eb/manager/3.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.694901 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-plxrr_cef58941-ae6b-4624-af41-65ab598838eb/manager/2.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.836816 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8566bc9698-5hw7j_042ed3e8-ea28-44f7-9859-2d0a1d5c3e17/kube-rbac-proxy/0.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 
10:17:16.869739 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8566bc9698-5hw7j_042ed3e8-ea28-44f7-9859-2d0a1d5c3e17/manager/1.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.908036 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8566bc9698-5hw7j_042ed3e8-ea28-44f7-9859-2d0a1d5c3e17/manager/0.log" Nov 25 10:17:16 crc kubenswrapper[4760]: I1125 10:17:16.983400 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-cr5ch_0f496ee1-ca51-427f-a51d-4fc214c7f50a/kube-rbac-proxy/0.log" Nov 25 10:17:17 crc kubenswrapper[4760]: I1125 10:17:17.116793 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-cr5ch_0f496ee1-ca51-427f-a51d-4fc214c7f50a/manager/2.log" Nov 25 10:17:17 crc kubenswrapper[4760]: I1125 10:17:17.124028 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-cr5ch_0f496ee1-ca51-427f-a51d-4fc214c7f50a/manager/1.log" Nov 25 10:17:25 crc kubenswrapper[4760]: I1125 10:17:25.939472 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:17:25 crc kubenswrapper[4760]: E1125 10:17:25.940223 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:17:30 crc kubenswrapper[4760]: E1125 10:17:30.939307 4760 kubelet_pods.go:538] "Hostname for pod was too 
long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:17:33 crc kubenswrapper[4760]: I1125 10:17:33.055413 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pf8bv_3acc0e9c-36be-4834-8450-d68aec396f24/control-plane-machine-set-operator/0.log" Nov 25 10:17:33 crc kubenswrapper[4760]: I1125 10:17:33.184911 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6w6bs_1ffafdad-e326-4d95-8733-e5b5b2197ad9/kube-rbac-proxy/0.log" Nov 25 10:17:33 crc kubenswrapper[4760]: I1125 10:17:33.243874 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6w6bs_1ffafdad-e326-4d95-8733-e5b5b2197ad9/machine-api-operator/0.log" Nov 25 10:17:36 crc kubenswrapper[4760]: I1125 10:17:36.951921 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:17:36 crc kubenswrapper[4760]: E1125 10:17:36.952736 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.524211 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkvqc"] Nov 25 10:17:41 crc kubenswrapper[4760]: E1125 10:17:41.525075 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1bfd7f-2156-4d84-bcfd-4d916a75a452" 
containerName="container-00" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.525088 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1bfd7f-2156-4d84-bcfd-4d916a75a452" containerName="container-00" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.525300 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1bfd7f-2156-4d84-bcfd-4d916a75a452" containerName="container-00" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.526700 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.634472 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkvqc"] Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.640337 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgdc2\" (UniqueName: \"kubernetes.io/projected/5d4b1498-7160-4f2f-9973-12383906a016-kube-api-access-qgdc2\") pod \"community-operators-tkvqc\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.640445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-catalog-content\") pod \"community-operators-tkvqc\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.640522 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-utilities\") pod \"community-operators-tkvqc\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " 
pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.745571 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-catalog-content\") pod \"community-operators-tkvqc\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.745669 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-utilities\") pod \"community-operators-tkvqc\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.745745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgdc2\" (UniqueName: \"kubernetes.io/projected/5d4b1498-7160-4f2f-9973-12383906a016-kube-api-access-qgdc2\") pod \"community-operators-tkvqc\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.746227 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-catalog-content\") pod \"community-operators-tkvqc\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.746296 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-utilities\") pod \"community-operators-tkvqc\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " 
pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.785575 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgdc2\" (UniqueName: \"kubernetes.io/projected/5d4b1498-7160-4f2f-9973-12383906a016-kube-api-access-qgdc2\") pod \"community-operators-tkvqc\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:41 crc kubenswrapper[4760]: I1125 10:17:41.856501 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.142443 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dkf5d"] Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.148564 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.156148 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkf5d"] Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.257325 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-catalog-content\") pod \"redhat-marketplace-dkf5d\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.257458 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s5f4\" (UniqueName: \"kubernetes.io/projected/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-kube-api-access-6s5f4\") pod \"redhat-marketplace-dkf5d\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " 
pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.257513 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-utilities\") pod \"redhat-marketplace-dkf5d\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.358950 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s5f4\" (UniqueName: \"kubernetes.io/projected/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-kube-api-access-6s5f4\") pod \"redhat-marketplace-dkf5d\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.359054 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-utilities\") pod \"redhat-marketplace-dkf5d\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.359130 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-catalog-content\") pod \"redhat-marketplace-dkf5d\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.359708 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-utilities\") pod \"redhat-marketplace-dkf5d\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " 
pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.359745 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-catalog-content\") pod \"redhat-marketplace-dkf5d\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.380004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s5f4\" (UniqueName: \"kubernetes.io/projected/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-kube-api-access-6s5f4\") pod \"redhat-marketplace-dkf5d\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.456871 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkvqc"] Nov 25 10:17:42 crc kubenswrapper[4760]: W1125 10:17:42.458384 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d4b1498_7160_4f2f_9973_12383906a016.slice/crio-cbf5ad908ac48ab06e667c48da37ed9d25358e9f5442a6a0dde954dccfbaaea7 WatchSource:0}: Error finding container cbf5ad908ac48ab06e667c48da37ed9d25358e9f5442a6a0dde954dccfbaaea7: Status 404 returned error can't find the container with id cbf5ad908ac48ab06e667c48da37ed9d25358e9f5442a6a0dde954dccfbaaea7 Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.478274 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:17:42 crc kubenswrapper[4760]: I1125 10:17:42.706854 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvqc" event={"ID":"5d4b1498-7160-4f2f-9973-12383906a016","Type":"ContainerStarted","Data":"cbf5ad908ac48ab06e667c48da37ed9d25358e9f5442a6a0dde954dccfbaaea7"} Nov 25 10:17:43 crc kubenswrapper[4760]: W1125 10:17:43.001314 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e69e4bd_a3b5_4db5_9444_bc26acbf1337.slice/crio-a4f5b8c36ad7dfc6d7fc96f9d2ce40e0db7bb5274f427004aff7d184a7e2d724 WatchSource:0}: Error finding container a4f5b8c36ad7dfc6d7fc96f9d2ce40e0db7bb5274f427004aff7d184a7e2d724: Status 404 returned error can't find the container with id a4f5b8c36ad7dfc6d7fc96f9d2ce40e0db7bb5274f427004aff7d184a7e2d724 Nov 25 10:17:43 crc kubenswrapper[4760]: I1125 10:17:43.010041 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkf5d"] Nov 25 10:17:43 crc kubenswrapper[4760]: I1125 10:17:43.719224 4760 generic.go:334] "Generic (PLEG): container finished" podID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerID="2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709" exitCode=0 Nov 25 10:17:43 crc kubenswrapper[4760]: I1125 10:17:43.719454 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkf5d" event={"ID":"1e69e4bd-a3b5-4db5-9444-bc26acbf1337","Type":"ContainerDied","Data":"2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709"} Nov 25 10:17:43 crc kubenswrapper[4760]: I1125 10:17:43.719684 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkf5d" 
event={"ID":"1e69e4bd-a3b5-4db5-9444-bc26acbf1337","Type":"ContainerStarted","Data":"a4f5b8c36ad7dfc6d7fc96f9d2ce40e0db7bb5274f427004aff7d184a7e2d724"} Nov 25 10:17:43 crc kubenswrapper[4760]: I1125 10:17:43.722576 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d4b1498-7160-4f2f-9973-12383906a016" containerID="8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df" exitCode=0 Nov 25 10:17:43 crc kubenswrapper[4760]: I1125 10:17:43.722621 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvqc" event={"ID":"5d4b1498-7160-4f2f-9973-12383906a016","Type":"ContainerDied","Data":"8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df"} Nov 25 10:17:45 crc kubenswrapper[4760]: I1125 10:17:45.848457 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-86mq8_a6f5c6ad-5f4b-442a-9041-7f053349a0e7/cert-manager-controller/1.log" Nov 25 10:17:45 crc kubenswrapper[4760]: I1125 10:17:45.872894 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-86mq8_a6f5c6ad-5f4b-442a-9041-7f053349a0e7/cert-manager-controller/0.log" Nov 25 10:17:46 crc kubenswrapper[4760]: I1125 10:17:46.070414 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-m6mjj_7498b2f4-5621-4e4d-8d34-d8fc09271dcf/cert-manager-cainjector/2.log" Nov 25 10:17:46 crc kubenswrapper[4760]: I1125 10:17:46.073141 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-m6mjj_7498b2f4-5621-4e4d-8d34-d8fc09271dcf/cert-manager-cainjector/1.log" Nov 25 10:17:46 crc kubenswrapper[4760]: I1125 10:17:46.252849 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-7849w_10171911-dbe6-4b07-a58e-07713d8112c2/cert-manager-webhook/0.log" Nov 25 10:17:46 crc kubenswrapper[4760]: I1125 
10:17:46.752862 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvqc" event={"ID":"5d4b1498-7160-4f2f-9973-12383906a016","Type":"ContainerStarted","Data":"de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0"} Nov 25 10:17:46 crc kubenswrapper[4760]: I1125 10:17:46.755657 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkf5d" event={"ID":"1e69e4bd-a3b5-4db5-9444-bc26acbf1337","Type":"ContainerStarted","Data":"051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c"} Nov 25 10:17:47 crc kubenswrapper[4760]: I1125 10:17:47.938807 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:17:47 crc kubenswrapper[4760]: E1125 10:17:47.939394 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:17:48 crc kubenswrapper[4760]: I1125 10:17:48.774398 4760 generic.go:334] "Generic (PLEG): container finished" podID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerID="051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c" exitCode=0 Nov 25 10:17:48 crc kubenswrapper[4760]: I1125 10:17:48.774497 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkf5d" event={"ID":"1e69e4bd-a3b5-4db5-9444-bc26acbf1337","Type":"ContainerDied","Data":"051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c"} Nov 25 10:17:52 crc kubenswrapper[4760]: I1125 10:17:52.809945 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-dkf5d" event={"ID":"1e69e4bd-a3b5-4db5-9444-bc26acbf1337","Type":"ContainerStarted","Data":"e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893"} Nov 25 10:17:52 crc kubenswrapper[4760]: I1125 10:17:52.832487 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dkf5d" podStartSLOduration=2.297034671 podStartE2EDuration="10.832470575s" podCreationTimestamp="2025-11-25 10:17:42 +0000 UTC" firstStartedPulling="2025-11-25 10:17:43.721318641 +0000 UTC m=+7597.430349426" lastFinishedPulling="2025-11-25 10:17:52.256754525 +0000 UTC m=+7605.965785330" observedRunningTime="2025-11-25 10:17:52.828726008 +0000 UTC m=+7606.537756803" watchObservedRunningTime="2025-11-25 10:17:52.832470575 +0000 UTC m=+7606.541501360" Nov 25 10:17:53 crc kubenswrapper[4760]: I1125 10:17:53.822686 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d4b1498-7160-4f2f-9973-12383906a016" containerID="de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0" exitCode=0 Nov 25 10:17:53 crc kubenswrapper[4760]: I1125 10:17:53.822814 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvqc" event={"ID":"5d4b1498-7160-4f2f-9973-12383906a016","Type":"ContainerDied","Data":"de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0"} Nov 25 10:17:55 crc kubenswrapper[4760]: I1125 10:17:55.843631 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvqc" event={"ID":"5d4b1498-7160-4f2f-9973-12383906a016","Type":"ContainerStarted","Data":"a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce"} Nov 25 10:17:55 crc kubenswrapper[4760]: I1125 10:17:55.865534 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tkvqc" podStartSLOduration=3.997074285 podStartE2EDuration="14.86551308s" 
podCreationTimestamp="2025-11-25 10:17:41 +0000 UTC" firstStartedPulling="2025-11-25 10:17:43.724747388 +0000 UTC m=+7597.433778183" lastFinishedPulling="2025-11-25 10:17:54.593186183 +0000 UTC m=+7608.302216978" observedRunningTime="2025-11-25 10:17:55.859120577 +0000 UTC m=+7609.568151382" watchObservedRunningTime="2025-11-25 10:17:55.86551308 +0000 UTC m=+7609.574543875" Nov 25 10:17:58 crc kubenswrapper[4760]: I1125 10:17:58.586143 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-cj4rl_9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b/nmstate-console-plugin/0.log" Nov 25 10:17:58 crc kubenswrapper[4760]: I1125 10:17:58.741588 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ld6xj_adb17860-3ba6-4771-88db-d63cebf97628/nmstate-handler/0.log" Nov 25 10:17:58 crc kubenswrapper[4760]: I1125 10:17:58.764357 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-c27qr_a7203aa8-a498-4242-9c79-3bcfb384707e/kube-rbac-proxy/0.log" Nov 25 10:17:58 crc kubenswrapper[4760]: I1125 10:17:58.773761 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-c27qr_a7203aa8-a498-4242-9c79-3bcfb384707e/nmstate-metrics/0.log" Nov 25 10:17:58 crc kubenswrapper[4760]: I1125 10:17:58.956197 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-cjvcc_08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff/nmstate-operator/0.log" Nov 25 10:17:58 crc kubenswrapper[4760]: I1125 10:17:58.992151 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-p7b9n_133b40ac-61d0-4821-813d-a3f722f95293/nmstate-webhook/0.log" Nov 25 10:18:01 crc kubenswrapper[4760]: I1125 10:18:01.856679 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:18:01 crc kubenswrapper[4760]: I1125 10:18:01.857003 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:18:01 crc kubenswrapper[4760]: I1125 10:18:01.910051 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:18:01 crc kubenswrapper[4760]: I1125 10:18:01.954128 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:18:02 crc kubenswrapper[4760]: I1125 10:18:02.151438 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkvqc"] Nov 25 10:18:02 crc kubenswrapper[4760]: I1125 10:18:02.478632 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:18:02 crc kubenswrapper[4760]: I1125 10:18:02.478681 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:18:02 crc kubenswrapper[4760]: I1125 10:18:02.538178 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:18:02 crc kubenswrapper[4760]: I1125 10:18:02.938905 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:18:02 crc kubenswrapper[4760]: E1125 10:18:02.939185 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" 
podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:18:02 crc kubenswrapper[4760]: I1125 10:18:02.963199 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:18:03 crc kubenswrapper[4760]: I1125 10:18:03.920981 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tkvqc" podUID="5d4b1498-7160-4f2f-9973-12383906a016" containerName="registry-server" containerID="cri-o://a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce" gracePeriod=2 Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.563831 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkf5d"] Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.636084 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.718081 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgdc2\" (UniqueName: \"kubernetes.io/projected/5d4b1498-7160-4f2f-9973-12383906a016-kube-api-access-qgdc2\") pod \"5d4b1498-7160-4f2f-9973-12383906a016\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.718194 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-utilities\") pod \"5d4b1498-7160-4f2f-9973-12383906a016\" (UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.718261 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-catalog-content\") pod \"5d4b1498-7160-4f2f-9973-12383906a016\" 
(UID: \"5d4b1498-7160-4f2f-9973-12383906a016\") " Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.721755 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-utilities" (OuterVolumeSpecName: "utilities") pod "5d4b1498-7160-4f2f-9973-12383906a016" (UID: "5d4b1498-7160-4f2f-9973-12383906a016"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.743403 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4b1498-7160-4f2f-9973-12383906a016-kube-api-access-qgdc2" (OuterVolumeSpecName: "kube-api-access-qgdc2") pod "5d4b1498-7160-4f2f-9973-12383906a016" (UID: "5d4b1498-7160-4f2f-9973-12383906a016"). InnerVolumeSpecName "kube-api-access-qgdc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.773583 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d4b1498-7160-4f2f-9973-12383906a016" (UID: "5d4b1498-7160-4f2f-9973-12383906a016"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.820155 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.820206 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgdc2\" (UniqueName: \"kubernetes.io/projected/5d4b1498-7160-4f2f-9973-12383906a016-kube-api-access-qgdc2\") on node \"crc\" DevicePath \"\"" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.820217 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d4b1498-7160-4f2f-9973-12383906a016-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.930949 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d4b1498-7160-4f2f-9973-12383906a016" containerID="a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce" exitCode=0 Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.931162 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dkf5d" podUID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerName="registry-server" containerID="cri-o://e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893" gracePeriod=2 Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.931294 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkvqc" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.934324 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvqc" event={"ID":"5d4b1498-7160-4f2f-9973-12383906a016","Type":"ContainerDied","Data":"a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce"} Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.934393 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkvqc" event={"ID":"5d4b1498-7160-4f2f-9973-12383906a016","Type":"ContainerDied","Data":"cbf5ad908ac48ab06e667c48da37ed9d25358e9f5442a6a0dde954dccfbaaea7"} Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.934456 4760 scope.go:117] "RemoveContainer" containerID="a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.958283 4760 scope.go:117] "RemoveContainer" containerID="de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.980137 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkvqc"] Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.989185 4760 scope.go:117] "RemoveContainer" containerID="8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df" Nov 25 10:18:04 crc kubenswrapper[4760]: I1125 10:18:04.997968 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkvqc"] Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.041633 4760 scope.go:117] "RemoveContainer" containerID="a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce" Nov 25 10:18:05 crc kubenswrapper[4760]: E1125 10:18:05.042438 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce\": container with ID starting with a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce not found: ID does not exist" containerID="a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce" Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.042486 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce"} err="failed to get container status \"a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce\": rpc error: code = NotFound desc = could not find container \"a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce\": container with ID starting with a1c73cf811e0f8624fdf835c82b43bfaa3347a56af4b3015e96dd7f778ea3dce not found: ID does not exist" Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.042511 4760 scope.go:117] "RemoveContainer" containerID="de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0" Nov 25 10:18:05 crc kubenswrapper[4760]: E1125 10:18:05.044615 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0\": container with ID starting with de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0 not found: ID does not exist" containerID="de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0" Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.044645 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0"} err="failed to get container status \"de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0\": rpc error: code = NotFound desc = could not find container \"de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0\": container with ID 
starting with de995fb8b48d9f0dea13cd38cb37370bb472529afcc5e19f18856d162dbc82a0 not found: ID does not exist" Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.044663 4760 scope.go:117] "RemoveContainer" containerID="8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df" Nov 25 10:18:05 crc kubenswrapper[4760]: E1125 10:18:05.045037 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df\": container with ID starting with 8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df not found: ID does not exist" containerID="8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df" Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.045060 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df"} err="failed to get container status \"8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df\": rpc error: code = NotFound desc = could not find container \"8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df\": container with ID starting with 8582ac7abea34ace2239629c914c344b56a7a389fe314a3fff007eb4ff5fa3df not found: ID does not exist" Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.926085 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.945415 4760 generic.go:334] "Generic (PLEG): container finished" podID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerID="e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893" exitCode=0 Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.945530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkf5d" event={"ID":"1e69e4bd-a3b5-4db5-9444-bc26acbf1337","Type":"ContainerDied","Data":"e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893"} Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.945557 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dkf5d" Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.945565 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkf5d" event={"ID":"1e69e4bd-a3b5-4db5-9444-bc26acbf1337","Type":"ContainerDied","Data":"a4f5b8c36ad7dfc6d7fc96f9d2ce40e0db7bb5274f427004aff7d184a7e2d724"} Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.945616 4760 scope.go:117] "RemoveContainer" containerID="e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893" Nov 25 10:18:05 crc kubenswrapper[4760]: I1125 10:18:05.975598 4760 scope.go:117] "RemoveContainer" containerID="051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.026221 4760 scope.go:117] "RemoveContainer" containerID="2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.042977 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-utilities\") pod 
\"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.043086 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s5f4\" (UniqueName: \"kubernetes.io/projected/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-kube-api-access-6s5f4\") pod \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.043280 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-catalog-content\") pod \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\" (UID: \"1e69e4bd-a3b5-4db5-9444-bc26acbf1337\") " Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.045586 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-utilities" (OuterVolumeSpecName: "utilities") pod "1e69e4bd-a3b5-4db5-9444-bc26acbf1337" (UID: "1e69e4bd-a3b5-4db5-9444-bc26acbf1337"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.059096 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-kube-api-access-6s5f4" (OuterVolumeSpecName: "kube-api-access-6s5f4") pod "1e69e4bd-a3b5-4db5-9444-bc26acbf1337" (UID: "1e69e4bd-a3b5-4db5-9444-bc26acbf1337"). InnerVolumeSpecName "kube-api-access-6s5f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.062111 4760 scope.go:117] "RemoveContainer" containerID="e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893" Nov 25 10:18:06 crc kubenswrapper[4760]: E1125 10:18:06.063115 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893\": container with ID starting with e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893 not found: ID does not exist" containerID="e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.063156 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893"} err="failed to get container status \"e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893\": rpc error: code = NotFound desc = could not find container \"e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893\": container with ID starting with e6777da38b1feaa51c3218534fec077a206fe5d9a42f230513b911edac9f1893 not found: ID does not exist" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.063183 4760 scope.go:117] "RemoveContainer" containerID="051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c" Nov 25 10:18:06 crc kubenswrapper[4760]: E1125 10:18:06.063711 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c\": container with ID starting with 051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c not found: ID does not exist" containerID="051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.063755 
4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c"} err="failed to get container status \"051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c\": rpc error: code = NotFound desc = could not find container \"051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c\": container with ID starting with 051fda96f7b89f7813e3ce8d11975a8852b367ba87c9d32ab50d64550c36194c not found: ID does not exist" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.063774 4760 scope.go:117] "RemoveContainer" containerID="2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709" Nov 25 10:18:06 crc kubenswrapper[4760]: E1125 10:18:06.064080 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709\": container with ID starting with 2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709 not found: ID does not exist" containerID="2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.064115 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709"} err="failed to get container status \"2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709\": rpc error: code = NotFound desc = could not find container \"2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709\": container with ID starting with 2cc8b81d24666934ccd29943dfe2e9c6bd60e22f6f7d11f4c856a8187b70b709 not found: ID does not exist" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.068043 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "1e69e4bd-a3b5-4db5-9444-bc26acbf1337" (UID: "1e69e4bd-a3b5-4db5-9444-bc26acbf1337"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.145432 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.145475 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.145489 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s5f4\" (UniqueName: \"kubernetes.io/projected/1e69e4bd-a3b5-4db5-9444-bc26acbf1337-kube-api-access-6s5f4\") on node \"crc\" DevicePath \"\"" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.279526 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkf5d"] Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.289625 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkf5d"] Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.950773 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" path="/var/lib/kubelet/pods/1e69e4bd-a3b5-4db5-9444-bc26acbf1337/volumes" Nov 25 10:18:06 crc kubenswrapper[4760]: I1125 10:18:06.951956 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4b1498-7160-4f2f-9973-12383906a016" path="/var/lib/kubelet/pods/5d4b1498-7160-4f2f-9973-12383906a016/volumes" Nov 25 10:18:12 crc kubenswrapper[4760]: I1125 10:18:12.675750 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-wdjm7_e911dae6-d9ed-40d3-802a-e536e5258829/kube-rbac-proxy/0.log" Nov 25 10:18:12 crc kubenswrapper[4760]: I1125 10:18:12.751908 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-wdjm7_e911dae6-d9ed-40d3-802a-e536e5258829/controller/0.log" Nov 25 10:18:12 crc kubenswrapper[4760]: I1125 10:18:12.847529 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-frr-files/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.050611 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-metrics/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.066612 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-reloader/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.071729 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-reloader/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.093500 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-frr-files/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.238591 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-metrics/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.277451 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-frr-files/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.281614 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-metrics/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.312265 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-reloader/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.466064 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-frr-files/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.514010 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-metrics/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.520764 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-reloader/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.537120 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/controller/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.721044 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/frr-metrics/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.751585 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/kube-rbac-proxy-frr/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.763517 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/kube-rbac-proxy/0.log" Nov 25 10:18:13 crc kubenswrapper[4760]: I1125 10:18:13.923670 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/reloader/0.log" Nov 25 10:18:14 crc kubenswrapper[4760]: I1125 10:18:14.033053 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-fzx95_3531211f-bf66-45cb-9c5f-4a7aca2efbad/frr-k8s-webhook-server/0.log" Nov 25 10:18:14 crc kubenswrapper[4760]: I1125 10:18:14.256262 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76784bbdf-m7z64_394da4a0-f1c0-45c3-a31b-9cace1180c53/manager/2.log" Nov 25 10:18:14 crc kubenswrapper[4760]: I1125 10:18:14.259171 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76784bbdf-m7z64_394da4a0-f1c0-45c3-a31b-9cace1180c53/manager/3.log" Nov 25 10:18:14 crc kubenswrapper[4760]: I1125 10:18:14.476328 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-547776db9-454dl_0f1ca361-a3c2-45c2-86ef-a32c06fe6476/webhook-server/0.log" Nov 25 10:18:14 crc kubenswrapper[4760]: I1125 10:18:14.642270 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m2nhl_44dac91a-5352-4392-ab9b-49c59e38409f/kube-rbac-proxy/0.log" Nov 25 10:18:15 crc kubenswrapper[4760]: I1125 10:18:15.368398 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m2nhl_44dac91a-5352-4392-ab9b-49c59e38409f/speaker/0.log" Nov 25 10:18:15 crc kubenswrapper[4760]: I1125 10:18:15.983175 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/frr/0.log" Nov 25 10:18:17 crc kubenswrapper[4760]: I1125 10:18:17.938216 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:18:17 crc kubenswrapper[4760]: E1125 10:18:17.939726 4760 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.117359 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/util/0.log" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.273084 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/util/0.log" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.274415 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/pull/0.log" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.293130 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/pull/0.log" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.451476 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/util/0.log" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.491839 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/pull/0.log" Nov 25 10:18:27 crc 
kubenswrapper[4760]: I1125 10:18:27.534797 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/extract/0.log" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.647755 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-utilities/0.log" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.804152 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-utilities/0.log" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.819152 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-content/0.log" Nov 25 10:18:27 crc kubenswrapper[4760]: I1125 10:18:27.844617 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-content/0.log" Nov 25 10:18:28 crc kubenswrapper[4760]: I1125 10:18:28.087825 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-utilities/0.log" Nov 25 10:18:28 crc kubenswrapper[4760]: I1125 10:18:28.105570 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-content/0.log" Nov 25 10:18:28 crc kubenswrapper[4760]: I1125 10:18:28.292298 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-utilities/0.log" Nov 25 10:18:28 crc kubenswrapper[4760]: I1125 10:18:28.548836 4760 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-content/0.log" Nov 25 10:18:28 crc kubenswrapper[4760]: I1125 10:18:28.584224 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-content/0.log" Nov 25 10:18:28 crc kubenswrapper[4760]: I1125 10:18:28.616574 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-utilities/0.log" Nov 25 10:18:28 crc kubenswrapper[4760]: I1125 10:18:28.770206 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/registry-server/0.log" Nov 25 10:18:28 crc kubenswrapper[4760]: I1125 10:18:28.859185 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-content/0.log" Nov 25 10:18:28 crc kubenswrapper[4760]: I1125 10:18:28.865492 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-utilities/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.079901 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/util/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.281362 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/util/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.315887 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/pull/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.342595 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/pull/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.600987 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/pull/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.619178 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/util/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.645111 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/extract/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.907492 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8s28s_613c9059-f285-4892-96c6-e27686513a0a/marketplace-operator/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.909106 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/registry-server/0.log" Nov 25 10:18:29 crc kubenswrapper[4760]: I1125 10:18:29.938847 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:18:29 crc kubenswrapper[4760]: E1125 10:18:29.939070 4760 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.063611 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-utilities/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.219672 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-utilities/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.222023 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-content/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.255070 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-content/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.410206 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-utilities/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.430144 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-content/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.659477 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-utilities/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.746352 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/registry-server/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.825016 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-utilities/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.856329 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-content/0.log" Nov 25 10:18:30 crc kubenswrapper[4760]: I1125 10:18:30.883083 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-content/0.log" Nov 25 10:18:31 crc kubenswrapper[4760]: I1125 10:18:31.040425 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-content/0.log" Nov 25 10:18:31 crc kubenswrapper[4760]: I1125 10:18:31.063750 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-utilities/0.log" Nov 25 10:18:31 crc kubenswrapper[4760]: I1125 10:18:31.947555 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/registry-server/0.log" Nov 25 10:18:44 crc kubenswrapper[4760]: I1125 10:18:44.942173 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:18:44 crc kubenswrapper[4760]: E1125 10:18:44.942974 
4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:18:55 crc kubenswrapper[4760]: E1125 10:18:55.938414 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:18:58 crc kubenswrapper[4760]: I1125 10:18:58.939092 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:18:58 crc kubenswrapper[4760]: E1125 10:18:58.939842 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:19:09 crc kubenswrapper[4760]: E1125 10:19:09.433592 4760 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.21:50496->38.129.56.21:33427: write tcp 38.129.56.21:50496->38.129.56.21:33427: write: broken pipe Nov 25 10:19:12 crc kubenswrapper[4760]: I1125 10:19:12.938467 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:19:12 crc kubenswrapper[4760]: E1125 10:19:12.939219 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:19:27 crc kubenswrapper[4760]: I1125 10:19:27.938933 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:19:27 crc kubenswrapper[4760]: E1125 10:19:27.939788 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:19:39 crc kubenswrapper[4760]: I1125 10:19:39.939071 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:19:39 crc kubenswrapper[4760]: E1125 10:19:39.939699 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:19:51 crc kubenswrapper[4760]: I1125 10:19:51.938160 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:19:51 crc kubenswrapper[4760]: E1125 10:19:51.938829 4760 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:20:02 crc kubenswrapper[4760]: E1125 10:20:02.943616 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:20:04 crc kubenswrapper[4760]: I1125 10:20:04.940014 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:20:06 crc kubenswrapper[4760]: I1125 10:20:06.107503 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"780ed74759efa35b2aca3b56abb3e29894df1c2c3771dd97b1caa752192dc819"} Nov 25 10:20:48 crc kubenswrapper[4760]: I1125 10:20:48.586814 4760 generic.go:334] "Generic (PLEG): container finished" podID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" containerID="b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633" exitCode=0 Nov 25 10:20:48 crc kubenswrapper[4760]: I1125 10:20:48.586924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x8nbl/must-gather-92n9p" event={"ID":"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1","Type":"ContainerDied","Data":"b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633"} Nov 25 10:20:48 crc kubenswrapper[4760]: I1125 10:20:48.588192 4760 scope.go:117] "RemoveContainer" containerID="b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633" Nov 25 10:20:48 crc 
kubenswrapper[4760]: I1125 10:20:48.716042 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x8nbl_must-gather-92n9p_e2129c9e-4e0d-4841-abb7-fa0d4271a3a1/gather/0.log" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.161330 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x8nbl/must-gather-92n9p"] Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.162159 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-x8nbl/must-gather-92n9p" podUID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" containerName="copy" containerID="cri-o://e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48" gracePeriod=2 Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.172859 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x8nbl/must-gather-92n9p"] Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.651585 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x8nbl_must-gather-92n9p_e2129c9e-4e0d-4841-abb7-fa0d4271a3a1/copy/0.log" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.652207 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.676576 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-must-gather-output\") pod \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\" (UID: \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\") " Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.677184 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x7sx\" (UniqueName: \"kubernetes.io/projected/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-kube-api-access-7x7sx\") pod \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\" (UID: \"e2129c9e-4e0d-4841-abb7-fa0d4271a3a1\") " Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.678691 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x8nbl_must-gather-92n9p_e2129c9e-4e0d-4841-abb7-fa0d4271a3a1/copy/0.log" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.679210 4760 generic.go:334] "Generic (PLEG): container finished" podID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" containerID="e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48" exitCode=143 Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.679282 4760 scope.go:117] "RemoveContainer" containerID="e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.679422 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x8nbl/must-gather-92n9p" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.687475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-kube-api-access-7x7sx" (OuterVolumeSpecName: "kube-api-access-7x7sx") pod "e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" (UID: "e2129c9e-4e0d-4841-abb7-fa0d4271a3a1"). InnerVolumeSpecName "kube-api-access-7x7sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.741221 4760 scope.go:117] "RemoveContainer" containerID="b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.780620 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x7sx\" (UniqueName: \"kubernetes.io/projected/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-kube-api-access-7x7sx\") on node \"crc\" DevicePath \"\"" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.833997 4760 scope.go:117] "RemoveContainer" containerID="e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48" Nov 25 10:20:57 crc kubenswrapper[4760]: E1125 10:20:57.835999 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48\": container with ID starting with e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48 not found: ID does not exist" containerID="e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.836034 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48"} err="failed to get container status \"e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48\": rpc error: code = 
NotFound desc = could not find container \"e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48\": container with ID starting with e20da8881d0fdae91de30f8d5ca33bce3b6a91f65e1ac516796befdc6489fb48 not found: ID does not exist" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.836058 4760 scope.go:117] "RemoveContainer" containerID="b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633" Nov 25 10:20:57 crc kubenswrapper[4760]: E1125 10:20:57.836366 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633\": container with ID starting with b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633 not found: ID does not exist" containerID="b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.836525 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633"} err="failed to get container status \"b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633\": rpc error: code = NotFound desc = could not find container \"b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633\": container with ID starting with b665b9cddf96d496aa92a8bc6c86e10205238eede1c88712bb4f11ae8c2dd633 not found: ID does not exist" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.896165 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" (UID: "e2129c9e-4e0d-4841-abb7-fa0d4271a3a1"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:20:57 crc kubenswrapper[4760]: I1125 10:20:57.984328 4760 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 10:20:58 crc kubenswrapper[4760]: I1125 10:20:58.960280 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" path="/var/lib/kubelet/pods/e2129c9e-4e0d-4841-abb7-fa0d4271a3a1/volumes" Nov 25 10:21:15 crc kubenswrapper[4760]: E1125 10:21:15.939579 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:21:36 crc kubenswrapper[4760]: I1125 10:21:36.938374 4760 scope.go:117] "RemoveContainer" containerID="c828b7042ee676c0f3ba4820fb021a2c2b26f1ac366cda7d627346e68d2cfa16" Nov 25 10:22:31 crc kubenswrapper[4760]: I1125 10:22:31.755267 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:22:31 crc kubenswrapper[4760]: I1125 10:22:31.755804 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:22:35 crc kubenswrapper[4760]: E1125 10:22:35.939001 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" 
podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.961146 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62zj9"] Nov 25 10:22:45 crc kubenswrapper[4760]: E1125 10:22:45.962391 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerName="extract-utilities" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962410 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerName="extract-utilities" Nov 25 10:22:45 crc kubenswrapper[4760]: E1125 10:22:45.962437 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4b1498-7160-4f2f-9973-12383906a016" containerName="extract-utilities" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962445 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4b1498-7160-4f2f-9973-12383906a016" containerName="extract-utilities" Nov 25 10:22:45 crc kubenswrapper[4760]: E1125 10:22:45.962472 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" containerName="copy" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962479 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" containerName="copy" Nov 25 10:22:45 crc kubenswrapper[4760]: E1125 10:22:45.962492 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerName="registry-server" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962500 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerName="registry-server" Nov 25 10:22:45 crc kubenswrapper[4760]: E1125 10:22:45.962514 4760 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4b1498-7160-4f2f-9973-12383906a016" containerName="extract-content" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962522 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4b1498-7160-4f2f-9973-12383906a016" containerName="extract-content" Nov 25 10:22:45 crc kubenswrapper[4760]: E1125 10:22:45.962555 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" containerName="gather" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962562 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" containerName="gather" Nov 25 10:22:45 crc kubenswrapper[4760]: E1125 10:22:45.962578 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerName="extract-content" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962590 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerName="extract-content" Nov 25 10:22:45 crc kubenswrapper[4760]: E1125 10:22:45.962606 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4b1498-7160-4f2f-9973-12383906a016" containerName="registry-server" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962613 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4b1498-7160-4f2f-9973-12383906a016" containerName="registry-server" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962851 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d4b1498-7160-4f2f-9973-12383906a016" containerName="registry-server" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962879 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" containerName="copy" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962888 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e2129c9e-4e0d-4841-abb7-fa0d4271a3a1" containerName="gather" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.962898 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e69e4bd-a3b5-4db5-9444-bc26acbf1337" containerName="registry-server" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.964799 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:45 crc kubenswrapper[4760]: I1125 10:22:45.975586 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62zj9"] Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.103929 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-catalog-content\") pod \"certified-operators-62zj9\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.104103 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svq9f\" (UniqueName: \"kubernetes.io/projected/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-kube-api-access-svq9f\") pod \"certified-operators-62zj9\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.104157 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-utilities\") pod \"certified-operators-62zj9\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.206572 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-catalog-content\") pod \"certified-operators-62zj9\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.206707 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svq9f\" (UniqueName: \"kubernetes.io/projected/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-kube-api-access-svq9f\") pod \"certified-operators-62zj9\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.206769 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-utilities\") pod \"certified-operators-62zj9\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.207378 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-catalog-content\") pod \"certified-operators-62zj9\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.208057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-utilities\") pod \"certified-operators-62zj9\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.232530 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-svq9f\" (UniqueName: \"kubernetes.io/projected/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-kube-api-access-svq9f\") pod \"certified-operators-62zj9\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:46 crc kubenswrapper[4760]: I1125 10:22:46.289129 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:47 crc kubenswrapper[4760]: I1125 10:22:47.040812 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62zj9"] Nov 25 10:22:47 crc kubenswrapper[4760]: I1125 10:22:47.940353 4760 generic.go:334] "Generic (PLEG): container finished" podID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerID="8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f" exitCode=0 Nov 25 10:22:47 crc kubenswrapper[4760]: I1125 10:22:47.940433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zj9" event={"ID":"497425cd-f4df-49bb-aa7a-5ad8b4f339f8","Type":"ContainerDied","Data":"8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f"} Nov 25 10:22:47 crc kubenswrapper[4760]: I1125 10:22:47.940671 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zj9" event={"ID":"497425cd-f4df-49bb-aa7a-5ad8b4f339f8","Type":"ContainerStarted","Data":"ab8bdbe3d05e76ad2ef776e5220a02684b6c1ca421cb754686fee806d079e3bd"} Nov 25 10:22:47 crc kubenswrapper[4760]: I1125 10:22:47.942651 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 10:22:49 crc kubenswrapper[4760]: I1125 10:22:49.959763 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zj9" 
event={"ID":"497425cd-f4df-49bb-aa7a-5ad8b4f339f8","Type":"ContainerStarted","Data":"50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce"} Nov 25 10:22:50 crc kubenswrapper[4760]: I1125 10:22:50.970161 4760 generic.go:334] "Generic (PLEG): container finished" podID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerID="50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce" exitCode=0 Nov 25 10:22:50 crc kubenswrapper[4760]: I1125 10:22:50.970207 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zj9" event={"ID":"497425cd-f4df-49bb-aa7a-5ad8b4f339f8","Type":"ContainerDied","Data":"50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce"} Nov 25 10:22:51 crc kubenswrapper[4760]: I1125 10:22:51.990585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zj9" event={"ID":"497425cd-f4df-49bb-aa7a-5ad8b4f339f8","Type":"ContainerStarted","Data":"054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226"} Nov 25 10:22:52 crc kubenswrapper[4760]: I1125 10:22:52.020200 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62zj9" podStartSLOduration=3.528575999 podStartE2EDuration="7.020179341s" podCreationTimestamp="2025-11-25 10:22:45 +0000 UTC" firstStartedPulling="2025-11-25 10:22:47.942346858 +0000 UTC m=+7901.651377653" lastFinishedPulling="2025-11-25 10:22:51.4339502 +0000 UTC m=+7905.142980995" observedRunningTime="2025-11-25 10:22:52.010006012 +0000 UTC m=+7905.719036827" watchObservedRunningTime="2025-11-25 10:22:52.020179341 +0000 UTC m=+7905.729210136" Nov 25 10:22:56 crc kubenswrapper[4760]: I1125 10:22:56.290495 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:56 crc kubenswrapper[4760]: I1125 10:22:56.291150 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:56 crc kubenswrapper[4760]: I1125 10:22:56.342216 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:57 crc kubenswrapper[4760]: I1125 10:22:57.097783 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:57 crc kubenswrapper[4760]: I1125 10:22:57.153836 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62zj9"] Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.049585 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-62zj9" podUID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerName="registry-server" containerID="cri-o://054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226" gracePeriod=2 Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.544433 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.700534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svq9f\" (UniqueName: \"kubernetes.io/projected/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-kube-api-access-svq9f\") pod \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.700996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-utilities\") pod \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.701142 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-catalog-content\") pod \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\" (UID: \"497425cd-f4df-49bb-aa7a-5ad8b4f339f8\") " Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.701891 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-utilities" (OuterVolumeSpecName: "utilities") pod "497425cd-f4df-49bb-aa7a-5ad8b4f339f8" (UID: "497425cd-f4df-49bb-aa7a-5ad8b4f339f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.706330 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-kube-api-access-svq9f" (OuterVolumeSpecName: "kube-api-access-svq9f") pod "497425cd-f4df-49bb-aa7a-5ad8b4f339f8" (UID: "497425cd-f4df-49bb-aa7a-5ad8b4f339f8"). InnerVolumeSpecName "kube-api-access-svq9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.771875 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "497425cd-f4df-49bb-aa7a-5ad8b4f339f8" (UID: "497425cd-f4df-49bb-aa7a-5ad8b4f339f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.803977 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svq9f\" (UniqueName: \"kubernetes.io/projected/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-kube-api-access-svq9f\") on node \"crc\" DevicePath \"\"" Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.804012 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:22:59 crc kubenswrapper[4760]: I1125 10:22:59.804022 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/497425cd-f4df-49bb-aa7a-5ad8b4f339f8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.062088 4760 generic.go:334] "Generic (PLEG): container finished" podID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerID="054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226" exitCode=0 Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.062185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62zj9" event={"ID":"497425cd-f4df-49bb-aa7a-5ad8b4f339f8","Type":"ContainerDied","Data":"054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226"} Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.062475 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-62zj9" event={"ID":"497425cd-f4df-49bb-aa7a-5ad8b4f339f8","Type":"ContainerDied","Data":"ab8bdbe3d05e76ad2ef776e5220a02684b6c1ca421cb754686fee806d079e3bd"} Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.062498 4760 scope.go:117] "RemoveContainer" containerID="054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.062225 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62zj9" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.104977 4760 scope.go:117] "RemoveContainer" containerID="50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.111808 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62zj9"] Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.121353 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-62zj9"] Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.127703 4760 scope.go:117] "RemoveContainer" containerID="8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.187099 4760 scope.go:117] "RemoveContainer" containerID="054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226" Nov 25 10:23:00 crc kubenswrapper[4760]: E1125 10:23:00.187658 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226\": container with ID starting with 054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226 not found: ID does not exist" containerID="054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 
10:23:00.187694 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226"} err="failed to get container status \"054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226\": rpc error: code = NotFound desc = could not find container \"054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226\": container with ID starting with 054e5808fe50a5d419ec2e72dd48729f06b4e54a6c93ad815b78abfafd408226 not found: ID does not exist" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.187716 4760 scope.go:117] "RemoveContainer" containerID="50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce" Nov 25 10:23:00 crc kubenswrapper[4760]: E1125 10:23:00.188093 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce\": container with ID starting with 50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce not found: ID does not exist" containerID="50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.188143 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce"} err="failed to get container status \"50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce\": rpc error: code = NotFound desc = could not find container \"50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce\": container with ID starting with 50a952ca3c16508339354435484f5784f703e187358176e882274d98f55383ce not found: ID does not exist" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.188172 4760 scope.go:117] "RemoveContainer" containerID="8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f" Nov 25 10:23:00 crc 
kubenswrapper[4760]: E1125 10:23:00.188526 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f\": container with ID starting with 8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f not found: ID does not exist" containerID="8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.188559 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f"} err="failed to get container status \"8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f\": rpc error: code = NotFound desc = could not find container \"8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f\": container with ID starting with 8a3e41569070cd4e9ad2beb85764c81e0d906eaf6255db79ac7462ca069ef62f not found: ID does not exist" Nov 25 10:23:00 crc kubenswrapper[4760]: I1125 10:23:00.957964 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" path="/var/lib/kubelet/pods/497425cd-f4df-49bb-aa7a-5ad8b4f339f8/volumes" Nov 25 10:23:01 crc kubenswrapper[4760]: I1125 10:23:01.745955 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:23:01 crc kubenswrapper[4760]: I1125 10:23:01.746269 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 25 10:23:31 crc kubenswrapper[4760]: I1125 10:23:31.745941 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:23:31 crc kubenswrapper[4760]: I1125 10:23:31.746627 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:23:31 crc kubenswrapper[4760]: I1125 10:23:31.746712 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 10:23:31 crc kubenswrapper[4760]: I1125 10:23:31.747915 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"780ed74759efa35b2aca3b56abb3e29894df1c2c3771dd97b1caa752192dc819"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:23:31 crc kubenswrapper[4760]: I1125 10:23:31.747987 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://780ed74759efa35b2aca3b56abb3e29894df1c2c3771dd97b1caa752192dc819" gracePeriod=600 Nov 25 10:23:32 crc kubenswrapper[4760]: I1125 10:23:32.448118 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" 
containerID="780ed74759efa35b2aca3b56abb3e29894df1c2c3771dd97b1caa752192dc819" exitCode=0 Nov 25 10:23:32 crc kubenswrapper[4760]: I1125 10:23:32.448299 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"780ed74759efa35b2aca3b56abb3e29894df1c2c3771dd97b1caa752192dc819"} Nov 25 10:23:32 crc kubenswrapper[4760]: I1125 10:23:32.448741 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905"} Nov 25 10:23:32 crc kubenswrapper[4760]: I1125 10:23:32.448768 4760 scope.go:117] "RemoveContainer" containerID="7cb305dab40c09cc90e04875c581add8f5d0f7f6584898779dcc923d70205b98" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.095857 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-68bv9"] Nov 25 10:23:40 crc kubenswrapper[4760]: E1125 10:23:40.097195 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerName="extract-utilities" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.097223 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerName="extract-utilities" Nov 25 10:23:40 crc kubenswrapper[4760]: E1125 10:23:40.097305 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerName="registry-server" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.097320 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerName="registry-server" Nov 25 10:23:40 crc kubenswrapper[4760]: E1125 10:23:40.097347 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerName="extract-content" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.097360 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerName="extract-content" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.097722 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="497425cd-f4df-49bb-aa7a-5ad8b4f339f8" containerName="registry-server" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.100226 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.108588 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68bv9"] Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.153316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-catalog-content\") pod \"redhat-operators-68bv9\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.153492 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crwnp\" (UniqueName: \"kubernetes.io/projected/24e62577-f67f-419d-91ee-4a5cc90aa47c-kube-api-access-crwnp\") pod \"redhat-operators-68bv9\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.153557 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-utilities\") pod 
\"redhat-operators-68bv9\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.254812 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-catalog-content\") pod \"redhat-operators-68bv9\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.254938 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crwnp\" (UniqueName: \"kubernetes.io/projected/24e62577-f67f-419d-91ee-4a5cc90aa47c-kube-api-access-crwnp\") pod \"redhat-operators-68bv9\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.254998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-utilities\") pod \"redhat-operators-68bv9\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.255543 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-catalog-content\") pod \"redhat-operators-68bv9\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.255609 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-utilities\") pod \"redhat-operators-68bv9\" (UID: 
\"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.297209 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crwnp\" (UniqueName: \"kubernetes.io/projected/24e62577-f67f-419d-91ee-4a5cc90aa47c-kube-api-access-crwnp\") pod \"redhat-operators-68bv9\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.430979 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:40 crc kubenswrapper[4760]: I1125 10:23:40.926456 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68bv9"] Nov 25 10:23:41 crc kubenswrapper[4760]: I1125 10:23:41.544526 4760 generic.go:334] "Generic (PLEG): container finished" podID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerID="2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098" exitCode=0 Nov 25 10:23:41 crc kubenswrapper[4760]: I1125 10:23:41.544827 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68bv9" event={"ID":"24e62577-f67f-419d-91ee-4a5cc90aa47c","Type":"ContainerDied","Data":"2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098"} Nov 25 10:23:41 crc kubenswrapper[4760]: I1125 10:23:41.544856 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68bv9" event={"ID":"24e62577-f67f-419d-91ee-4a5cc90aa47c","Type":"ContainerStarted","Data":"1f5907e0dbaeaf7c30ee34b3eabcc50bcb148f271bc1a735705a79c1211787fc"} Nov 25 10:23:43 crc kubenswrapper[4760]: I1125 10:23:43.565980 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68bv9" 
event={"ID":"24e62577-f67f-419d-91ee-4a5cc90aa47c","Type":"ContainerStarted","Data":"006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b"} Nov 25 10:23:45 crc kubenswrapper[4760]: I1125 10:23:45.590576 4760 generic.go:334] "Generic (PLEG): container finished" podID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerID="006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b" exitCode=0 Nov 25 10:23:45 crc kubenswrapper[4760]: I1125 10:23:45.590679 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68bv9" event={"ID":"24e62577-f67f-419d-91ee-4a5cc90aa47c","Type":"ContainerDied","Data":"006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b"} Nov 25 10:23:46 crc kubenswrapper[4760]: I1125 10:23:46.604458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68bv9" event={"ID":"24e62577-f67f-419d-91ee-4a5cc90aa47c","Type":"ContainerStarted","Data":"3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88"} Nov 25 10:23:46 crc kubenswrapper[4760]: I1125 10:23:46.627441 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-68bv9" podStartSLOduration=2.121659408 podStartE2EDuration="6.627420992s" podCreationTimestamp="2025-11-25 10:23:40 +0000 UTC" firstStartedPulling="2025-11-25 10:23:41.555198759 +0000 UTC m=+7955.264229554" lastFinishedPulling="2025-11-25 10:23:46.060960343 +0000 UTC m=+7959.769991138" observedRunningTime="2025-11-25 10:23:46.620784713 +0000 UTC m=+7960.329815518" watchObservedRunningTime="2025-11-25 10:23:46.627420992 +0000 UTC m=+7960.336451777" Nov 25 10:23:50 crc kubenswrapper[4760]: I1125 10:23:50.431135 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:50 crc kubenswrapper[4760]: I1125 10:23:50.432536 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:23:51 crc kubenswrapper[4760]: I1125 10:23:51.480198 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68bv9" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerName="registry-server" probeResult="failure" output=< Nov 25 10:23:51 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Nov 25 10:23:51 crc kubenswrapper[4760]: > Nov 25 10:23:58 crc kubenswrapper[4760]: I1125 10:23:58.865090 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5lmg8/must-gather-p6glf"] Nov 25 10:23:58 crc kubenswrapper[4760]: I1125 10:23:58.869700 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:23:58 crc kubenswrapper[4760]: I1125 10:23:58.882908 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5lmg8"/"openshift-service-ca.crt" Nov 25 10:23:58 crc kubenswrapper[4760]: I1125 10:23:58.882926 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5lmg8"/"kube-root-ca.crt" Nov 25 10:23:58 crc kubenswrapper[4760]: I1125 10:23:58.895757 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5lmg8/must-gather-p6glf"] Nov 25 10:23:58 crc kubenswrapper[4760]: I1125 10:23:58.981094 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pscr\" (UniqueName: \"kubernetes.io/projected/476479c3-79d3-4f4a-92c6-95e623dddb3d-kube-api-access-8pscr\") pod \"must-gather-p6glf\" (UID: \"476479c3-79d3-4f4a-92c6-95e623dddb3d\") " pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:23:58 crc kubenswrapper[4760]: I1125 10:23:58.981352 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" 
(UniqueName: \"kubernetes.io/empty-dir/476479c3-79d3-4f4a-92c6-95e623dddb3d-must-gather-output\") pod \"must-gather-p6glf\" (UID: \"476479c3-79d3-4f4a-92c6-95e623dddb3d\") " pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:23:59 crc kubenswrapper[4760]: I1125 10:23:59.082805 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pscr\" (UniqueName: \"kubernetes.io/projected/476479c3-79d3-4f4a-92c6-95e623dddb3d-kube-api-access-8pscr\") pod \"must-gather-p6glf\" (UID: \"476479c3-79d3-4f4a-92c6-95e623dddb3d\") " pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:23:59 crc kubenswrapper[4760]: I1125 10:23:59.083032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/476479c3-79d3-4f4a-92c6-95e623dddb3d-must-gather-output\") pod \"must-gather-p6glf\" (UID: \"476479c3-79d3-4f4a-92c6-95e623dddb3d\") " pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:23:59 crc kubenswrapper[4760]: I1125 10:23:59.083551 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/476479c3-79d3-4f4a-92c6-95e623dddb3d-must-gather-output\") pod \"must-gather-p6glf\" (UID: \"476479c3-79d3-4f4a-92c6-95e623dddb3d\") " pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:23:59 crc kubenswrapper[4760]: I1125 10:23:59.103795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pscr\" (UniqueName: \"kubernetes.io/projected/476479c3-79d3-4f4a-92c6-95e623dddb3d-kube-api-access-8pscr\") pod \"must-gather-p6glf\" (UID: \"476479c3-79d3-4f4a-92c6-95e623dddb3d\") " pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:23:59 crc kubenswrapper[4760]: I1125 10:23:59.201109 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:23:59 crc kubenswrapper[4760]: W1125 10:23:59.700276 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod476479c3_79d3_4f4a_92c6_95e623dddb3d.slice/crio-750f9df8615ec7c1a88cadeaf66ba9f0ba241b63cb7b827f091eca9440e41c3c WatchSource:0}: Error finding container 750f9df8615ec7c1a88cadeaf66ba9f0ba241b63cb7b827f091eca9440e41c3c: Status 404 returned error can't find the container with id 750f9df8615ec7c1a88cadeaf66ba9f0ba241b63cb7b827f091eca9440e41c3c Nov 25 10:23:59 crc kubenswrapper[4760]: I1125 10:23:59.700884 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5lmg8/must-gather-p6glf"] Nov 25 10:23:59 crc kubenswrapper[4760]: I1125 10:23:59.723385 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/must-gather-p6glf" event={"ID":"476479c3-79d3-4f4a-92c6-95e623dddb3d","Type":"ContainerStarted","Data":"750f9df8615ec7c1a88cadeaf66ba9f0ba241b63cb7b827f091eca9440e41c3c"} Nov 25 10:24:00 crc kubenswrapper[4760]: I1125 10:24:00.490385 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:24:00 crc kubenswrapper[4760]: I1125 10:24:00.551353 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:24:00 crc kubenswrapper[4760]: I1125 10:24:00.734594 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68bv9"] Nov 25 10:24:00 crc kubenswrapper[4760]: I1125 10:24:00.741989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/must-gather-p6glf" event={"ID":"476479c3-79d3-4f4a-92c6-95e623dddb3d","Type":"ContainerStarted","Data":"51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5"} Nov 25 10:24:00 crc 
kubenswrapper[4760]: I1125 10:24:00.742023 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/must-gather-p6glf" event={"ID":"476479c3-79d3-4f4a-92c6-95e623dddb3d","Type":"ContainerStarted","Data":"276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e"} Nov 25 10:24:00 crc kubenswrapper[4760]: I1125 10:24:00.763293 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5lmg8/must-gather-p6glf" podStartSLOduration=2.763269073 podStartE2EDuration="2.763269073s" podCreationTimestamp="2025-11-25 10:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:24:00.756936783 +0000 UTC m=+7974.465967588" watchObservedRunningTime="2025-11-25 10:24:00.763269073 +0000 UTC m=+7974.472299868" Nov 25 10:24:01 crc kubenswrapper[4760]: I1125 10:24:01.748866 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-68bv9" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerName="registry-server" containerID="cri-o://3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88" gracePeriod=2 Nov 25 10:24:01 crc kubenswrapper[4760]: E1125 10:24:01.939010 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.382750 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.463215 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-catalog-content\") pod \"24e62577-f67f-419d-91ee-4a5cc90aa47c\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.463359 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crwnp\" (UniqueName: \"kubernetes.io/projected/24e62577-f67f-419d-91ee-4a5cc90aa47c-kube-api-access-crwnp\") pod \"24e62577-f67f-419d-91ee-4a5cc90aa47c\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.463391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-utilities\") pod \"24e62577-f67f-419d-91ee-4a5cc90aa47c\" (UID: \"24e62577-f67f-419d-91ee-4a5cc90aa47c\") " Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.464943 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-utilities" (OuterVolumeSpecName: "utilities") pod "24e62577-f67f-419d-91ee-4a5cc90aa47c" (UID: "24e62577-f67f-419d-91ee-4a5cc90aa47c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.479519 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e62577-f67f-419d-91ee-4a5cc90aa47c-kube-api-access-crwnp" (OuterVolumeSpecName: "kube-api-access-crwnp") pod "24e62577-f67f-419d-91ee-4a5cc90aa47c" (UID: "24e62577-f67f-419d-91ee-4a5cc90aa47c"). InnerVolumeSpecName "kube-api-access-crwnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.561131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24e62577-f67f-419d-91ee-4a5cc90aa47c" (UID: "24e62577-f67f-419d-91ee-4a5cc90aa47c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.566365 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.566396 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crwnp\" (UniqueName: \"kubernetes.io/projected/24e62577-f67f-419d-91ee-4a5cc90aa47c-kube-api-access-crwnp\") on node \"crc\" DevicePath \"\"" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.566410 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e62577-f67f-419d-91ee-4a5cc90aa47c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.765082 4760 generic.go:334] "Generic (PLEG): container finished" podID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerID="3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88" exitCode=0 Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.765125 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68bv9" event={"ID":"24e62577-f67f-419d-91ee-4a5cc90aa47c","Type":"ContainerDied","Data":"3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88"} Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.765152 4760 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-68bv9" event={"ID":"24e62577-f67f-419d-91ee-4a5cc90aa47c","Type":"ContainerDied","Data":"1f5907e0dbaeaf7c30ee34b3eabcc50bcb148f271bc1a735705a79c1211787fc"} Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.765170 4760 scope.go:117] "RemoveContainer" containerID="3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.765665 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68bv9" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.799787 4760 scope.go:117] "RemoveContainer" containerID="006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.830480 4760 scope.go:117] "RemoveContainer" containerID="2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.833743 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68bv9"] Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.849907 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-68bv9"] Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.877554 4760 scope.go:117] "RemoveContainer" containerID="3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88" Nov 25 10:24:02 crc kubenswrapper[4760]: E1125 10:24:02.878105 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88\": container with ID starting with 3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88 not found: ID does not exist" containerID="3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.878135 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88"} err="failed to get container status \"3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88\": rpc error: code = NotFound desc = could not find container \"3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88\": container with ID starting with 3bec0cb4104b76f5aef6880b8df411b6a8b31a91c2b34fff2bc0c6f0c3ac7d88 not found: ID does not exist" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.878161 4760 scope.go:117] "RemoveContainer" containerID="006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b" Nov 25 10:24:02 crc kubenswrapper[4760]: E1125 10:24:02.878603 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b\": container with ID starting with 006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b not found: ID does not exist" containerID="006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.878631 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b"} err="failed to get container status \"006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b\": rpc error: code = NotFound desc = could not find container \"006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b\": container with ID starting with 006b9fe8a51dd243586979c754e3ce9b87f5315873bf216230560485194ceb7b not found: ID does not exist" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.878646 4760 scope.go:117] "RemoveContainer" containerID="2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098" Nov 25 10:24:02 crc kubenswrapper[4760]: E1125 
10:24:02.878933 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098\": container with ID starting with 2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098 not found: ID does not exist" containerID="2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.878952 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098"} err="failed to get container status \"2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098\": rpc error: code = NotFound desc = could not find container \"2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098\": container with ID starting with 2e3107e3b11070223134e8944a1cbf7ae2b68e1ec85e7f511d1cebfbfa3cc098 not found: ID does not exist" Nov 25 10:24:02 crc kubenswrapper[4760]: I1125 10:24:02.967381 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" path="/var/lib/kubelet/pods/24e62577-f67f-419d-91ee-4a5cc90aa47c/volumes" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.748536 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5lmg8/crc-debug-mf8bd"] Nov 25 10:24:04 crc kubenswrapper[4760]: E1125 10:24:04.749631 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerName="registry-server" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.749648 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerName="registry-server" Nov 25 10:24:04 crc kubenswrapper[4760]: E1125 10:24:04.749665 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" 
containerName="extract-content" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.749672 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerName="extract-content" Nov 25 10:24:04 crc kubenswrapper[4760]: E1125 10:24:04.749686 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerName="extract-utilities" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.749693 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerName="extract-utilities" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.749910 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e62577-f67f-419d-91ee-4a5cc90aa47c" containerName="registry-server" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.750768 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.753568 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5lmg8"/"default-dockercfg-xps5q" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.816080 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgsh8\" (UniqueName: \"kubernetes.io/projected/5002a4b2-404c-4531-8ef6-28597a6da28f-kube-api-access-cgsh8\") pod \"crc-debug-mf8bd\" (UID: \"5002a4b2-404c-4531-8ef6-28597a6da28f\") " pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.816513 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5002a4b2-404c-4531-8ef6-28597a6da28f-host\") pod \"crc-debug-mf8bd\" (UID: \"5002a4b2-404c-4531-8ef6-28597a6da28f\") " pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" 
Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.918390 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgsh8\" (UniqueName: \"kubernetes.io/projected/5002a4b2-404c-4531-8ef6-28597a6da28f-kube-api-access-cgsh8\") pod \"crc-debug-mf8bd\" (UID: \"5002a4b2-404c-4531-8ef6-28597a6da28f\") " pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.918476 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5002a4b2-404c-4531-8ef6-28597a6da28f-host\") pod \"crc-debug-mf8bd\" (UID: \"5002a4b2-404c-4531-8ef6-28597a6da28f\") " pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.918654 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5002a4b2-404c-4531-8ef6-28597a6da28f-host\") pod \"crc-debug-mf8bd\" (UID: \"5002a4b2-404c-4531-8ef6-28597a6da28f\") " pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" Nov 25 10:24:04 crc kubenswrapper[4760]: I1125 10:24:04.939541 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgsh8\" (UniqueName: \"kubernetes.io/projected/5002a4b2-404c-4531-8ef6-28597a6da28f-kube-api-access-cgsh8\") pod \"crc-debug-mf8bd\" (UID: \"5002a4b2-404c-4531-8ef6-28597a6da28f\") " pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" Nov 25 10:24:05 crc kubenswrapper[4760]: I1125 10:24:05.071682 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" Nov 25 10:24:05 crc kubenswrapper[4760]: I1125 10:24:05.798332 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" event={"ID":"5002a4b2-404c-4531-8ef6-28597a6da28f","Type":"ContainerStarted","Data":"8499f4b50983c42d806f726998cb5b93dd50dfa16af9da5742b87b91b260a362"} Nov 25 10:24:05 crc kubenswrapper[4760]: I1125 10:24:05.799027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" event={"ID":"5002a4b2-404c-4531-8ef6-28597a6da28f","Type":"ContainerStarted","Data":"5b690ad04742635910030e533ae1415704331dc79c5d322e4de236e743a7bbac"} Nov 25 10:24:05 crc kubenswrapper[4760]: I1125 10:24:05.827063 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" podStartSLOduration=1.827042705 podStartE2EDuration="1.827042705s" podCreationTimestamp="2025-11-25 10:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 10:24:05.817190834 +0000 UTC m=+7979.526221639" watchObservedRunningTime="2025-11-25 10:24:05.827042705 +0000 UTC m=+7979.536073500" Nov 25 10:24:50 crc kubenswrapper[4760]: I1125 10:24:50.299907 4760 generic.go:334] "Generic (PLEG): container finished" podID="5002a4b2-404c-4531-8ef6-28597a6da28f" containerID="8499f4b50983c42d806f726998cb5b93dd50dfa16af9da5742b87b91b260a362" exitCode=0 Nov 25 10:24:50 crc kubenswrapper[4760]: I1125 10:24:50.300449 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" event={"ID":"5002a4b2-404c-4531-8ef6-28597a6da28f","Type":"ContainerDied","Data":"8499f4b50983c42d806f726998cb5b93dd50dfa16af9da5742b87b91b260a362"} Nov 25 10:24:51 crc kubenswrapper[4760]: I1125 10:24:51.427302 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" Nov 25 10:24:51 crc kubenswrapper[4760]: I1125 10:24:51.478871 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5lmg8/crc-debug-mf8bd"] Nov 25 10:24:51 crc kubenswrapper[4760]: I1125 10:24:51.495680 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5lmg8/crc-debug-mf8bd"] Nov 25 10:24:51 crc kubenswrapper[4760]: I1125 10:24:51.583159 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgsh8\" (UniqueName: \"kubernetes.io/projected/5002a4b2-404c-4531-8ef6-28597a6da28f-kube-api-access-cgsh8\") pod \"5002a4b2-404c-4531-8ef6-28597a6da28f\" (UID: \"5002a4b2-404c-4531-8ef6-28597a6da28f\") " Nov 25 10:24:51 crc kubenswrapper[4760]: I1125 10:24:51.583442 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5002a4b2-404c-4531-8ef6-28597a6da28f-host\") pod \"5002a4b2-404c-4531-8ef6-28597a6da28f\" (UID: \"5002a4b2-404c-4531-8ef6-28597a6da28f\") " Nov 25 10:24:51 crc kubenswrapper[4760]: I1125 10:24:51.583923 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5002a4b2-404c-4531-8ef6-28597a6da28f-host" (OuterVolumeSpecName: "host") pod "5002a4b2-404c-4531-8ef6-28597a6da28f" (UID: "5002a4b2-404c-4531-8ef6-28597a6da28f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:24:51 crc kubenswrapper[4760]: I1125 10:24:51.584514 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5002a4b2-404c-4531-8ef6-28597a6da28f-host\") on node \"crc\" DevicePath \"\"" Nov 25 10:24:51 crc kubenswrapper[4760]: I1125 10:24:51.597919 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5002a4b2-404c-4531-8ef6-28597a6da28f-kube-api-access-cgsh8" (OuterVolumeSpecName: "kube-api-access-cgsh8") pod "5002a4b2-404c-4531-8ef6-28597a6da28f" (UID: "5002a4b2-404c-4531-8ef6-28597a6da28f"). InnerVolumeSpecName "kube-api-access-cgsh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:24:51 crc kubenswrapper[4760]: I1125 10:24:51.688916 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgsh8\" (UniqueName: \"kubernetes.io/projected/5002a4b2-404c-4531-8ef6-28597a6da28f-kube-api-access-cgsh8\") on node \"crc\" DevicePath \"\"" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.330235 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b690ad04742635910030e533ae1415704331dc79c5d322e4de236e743a7bbac" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.330326 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-mf8bd" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.716521 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5lmg8/crc-debug-vjq2q"] Nov 25 10:24:52 crc kubenswrapper[4760]: E1125 10:24:52.718174 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5002a4b2-404c-4531-8ef6-28597a6da28f" containerName="container-00" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.718292 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5002a4b2-404c-4531-8ef6-28597a6da28f" containerName="container-00" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.718665 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5002a4b2-404c-4531-8ef6-28597a6da28f" containerName="container-00" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.719595 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.721675 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5lmg8"/"default-dockercfg-xps5q" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.915306 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rt9\" (UniqueName: \"kubernetes.io/projected/73341970-afd0-4594-9b2f-7535202d754e-kube-api-access-72rt9\") pod \"crc-debug-vjq2q\" (UID: \"73341970-afd0-4594-9b2f-7535202d754e\") " pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.915903 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73341970-afd0-4594-9b2f-7535202d754e-host\") pod \"crc-debug-vjq2q\" (UID: \"73341970-afd0-4594-9b2f-7535202d754e\") " 
pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:52 crc kubenswrapper[4760]: I1125 10:24:52.951829 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5002a4b2-404c-4531-8ef6-28597a6da28f" path="/var/lib/kubelet/pods/5002a4b2-404c-4531-8ef6-28597a6da28f/volumes" Nov 25 10:24:53 crc kubenswrapper[4760]: I1125 10:24:53.017307 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73341970-afd0-4594-9b2f-7535202d754e-host\") pod \"crc-debug-vjq2q\" (UID: \"73341970-afd0-4594-9b2f-7535202d754e\") " pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:53 crc kubenswrapper[4760]: I1125 10:24:53.017522 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73341970-afd0-4594-9b2f-7535202d754e-host\") pod \"crc-debug-vjq2q\" (UID: \"73341970-afd0-4594-9b2f-7535202d754e\") " pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:53 crc kubenswrapper[4760]: I1125 10:24:53.017564 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rt9\" (UniqueName: \"kubernetes.io/projected/73341970-afd0-4594-9b2f-7535202d754e-kube-api-access-72rt9\") pod \"crc-debug-vjq2q\" (UID: \"73341970-afd0-4594-9b2f-7535202d754e\") " pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:53 crc kubenswrapper[4760]: I1125 10:24:53.044639 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rt9\" (UniqueName: \"kubernetes.io/projected/73341970-afd0-4594-9b2f-7535202d754e-kube-api-access-72rt9\") pod \"crc-debug-vjq2q\" (UID: \"73341970-afd0-4594-9b2f-7535202d754e\") " pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:53 crc kubenswrapper[4760]: I1125 10:24:53.340050 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:54 crc kubenswrapper[4760]: I1125 10:24:54.351408 4760 generic.go:334] "Generic (PLEG): container finished" podID="73341970-afd0-4594-9b2f-7535202d754e" containerID="26a4dadfaee4a3c92f96dd39e692b16909e50b290fcf3eb479fc75ea4986e6f5" exitCode=0 Nov 25 10:24:54 crc kubenswrapper[4760]: I1125 10:24:54.351607 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" event={"ID":"73341970-afd0-4594-9b2f-7535202d754e","Type":"ContainerDied","Data":"26a4dadfaee4a3c92f96dd39e692b16909e50b290fcf3eb479fc75ea4986e6f5"} Nov 25 10:24:54 crc kubenswrapper[4760]: I1125 10:24:54.353185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" event={"ID":"73341970-afd0-4594-9b2f-7535202d754e","Type":"ContainerStarted","Data":"49fe217aa0df7430fee70c0697db92f9d7465c3ff9ca1b1e5f3d0d9f0d0a3a3a"} Nov 25 10:24:55 crc kubenswrapper[4760]: I1125 10:24:55.500805 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:55 crc kubenswrapper[4760]: I1125 10:24:55.567428 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72rt9\" (UniqueName: \"kubernetes.io/projected/73341970-afd0-4594-9b2f-7535202d754e-kube-api-access-72rt9\") pod \"73341970-afd0-4594-9b2f-7535202d754e\" (UID: \"73341970-afd0-4594-9b2f-7535202d754e\") " Nov 25 10:24:55 crc kubenswrapper[4760]: I1125 10:24:55.567560 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73341970-afd0-4594-9b2f-7535202d754e-host\") pod \"73341970-afd0-4594-9b2f-7535202d754e\" (UID: \"73341970-afd0-4594-9b2f-7535202d754e\") " Nov 25 10:24:55 crc kubenswrapper[4760]: I1125 10:24:55.568091 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73341970-afd0-4594-9b2f-7535202d754e-host" (OuterVolumeSpecName: "host") pod "73341970-afd0-4594-9b2f-7535202d754e" (UID: "73341970-afd0-4594-9b2f-7535202d754e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:24:55 crc kubenswrapper[4760]: I1125 10:24:55.575266 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73341970-afd0-4594-9b2f-7535202d754e-kube-api-access-72rt9" (OuterVolumeSpecName: "kube-api-access-72rt9") pod "73341970-afd0-4594-9b2f-7535202d754e" (UID: "73341970-afd0-4594-9b2f-7535202d754e"). InnerVolumeSpecName "kube-api-access-72rt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:24:55 crc kubenswrapper[4760]: I1125 10:24:55.669550 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72rt9\" (UniqueName: \"kubernetes.io/projected/73341970-afd0-4594-9b2f-7535202d754e-kube-api-access-72rt9\") on node \"crc\" DevicePath \"\"" Nov 25 10:24:55 crc kubenswrapper[4760]: I1125 10:24:55.669835 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73341970-afd0-4594-9b2f-7535202d754e-host\") on node \"crc\" DevicePath \"\"" Nov 25 10:24:56 crc kubenswrapper[4760]: I1125 10:24:56.375926 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" Nov 25 10:24:56 crc kubenswrapper[4760]: I1125 10:24:56.375983 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/crc-debug-vjq2q" event={"ID":"73341970-afd0-4594-9b2f-7535202d754e","Type":"ContainerDied","Data":"49fe217aa0df7430fee70c0697db92f9d7465c3ff9ca1b1e5f3d0d9f0d0a3a3a"} Nov 25 10:24:56 crc kubenswrapper[4760]: I1125 10:24:56.376157 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49fe217aa0df7430fee70c0697db92f9d7465c3ff9ca1b1e5f3d0d9f0d0a3a3a" Nov 25 10:24:56 crc kubenswrapper[4760]: I1125 10:24:56.676872 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5lmg8/crc-debug-vjq2q"] Nov 25 10:24:56 crc kubenswrapper[4760]: I1125 10:24:56.688786 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5lmg8/crc-debug-vjq2q"] Nov 25 10:24:56 crc kubenswrapper[4760]: I1125 10:24:56.950731 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73341970-afd0-4594-9b2f-7535202d754e" path="/var/lib/kubelet/pods/73341970-afd0-4594-9b2f-7535202d754e/volumes" Nov 25 10:24:57 crc kubenswrapper[4760]: I1125 10:24:57.982859 4760 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-must-gather-5lmg8/crc-debug-bcbsg"] Nov 25 10:24:57 crc kubenswrapper[4760]: E1125 10:24:57.983579 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73341970-afd0-4594-9b2f-7535202d754e" containerName="container-00" Nov 25 10:24:57 crc kubenswrapper[4760]: I1125 10:24:57.983592 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="73341970-afd0-4594-9b2f-7535202d754e" containerName="container-00" Nov 25 10:24:57 crc kubenswrapper[4760]: I1125 10:24:57.983758 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="73341970-afd0-4594-9b2f-7535202d754e" containerName="container-00" Nov 25 10:24:57 crc kubenswrapper[4760]: I1125 10:24:57.984601 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:24:57 crc kubenswrapper[4760]: I1125 10:24:57.986417 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5lmg8"/"default-dockercfg-xps5q" Nov 25 10:24:58 crc kubenswrapper[4760]: I1125 10:24:58.121448 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2qpj\" (UniqueName: \"kubernetes.io/projected/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-kube-api-access-r2qpj\") pod \"crc-debug-bcbsg\" (UID: \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\") " pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:24:58 crc kubenswrapper[4760]: I1125 10:24:58.121871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-host\") pod \"crc-debug-bcbsg\" (UID: \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\") " pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:24:58 crc kubenswrapper[4760]: I1125 10:24:58.223904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-host\") pod \"crc-debug-bcbsg\" (UID: \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\") " pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:24:58 crc kubenswrapper[4760]: I1125 10:24:58.224013 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2qpj\" (UniqueName: \"kubernetes.io/projected/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-kube-api-access-r2qpj\") pod \"crc-debug-bcbsg\" (UID: \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\") " pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:24:58 crc kubenswrapper[4760]: I1125 10:24:58.224046 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-host\") pod \"crc-debug-bcbsg\" (UID: \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\") " pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:24:58 crc kubenswrapper[4760]: I1125 10:24:58.246000 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2qpj\" (UniqueName: \"kubernetes.io/projected/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-kube-api-access-r2qpj\") pod \"crc-debug-bcbsg\" (UID: \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\") " pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:24:58 crc kubenswrapper[4760]: I1125 10:24:58.304395 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:24:58 crc kubenswrapper[4760]: I1125 10:24:58.401675 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" event={"ID":"b94ef4f5-688e-40fa-81f0-bda19b5fdda7","Type":"ContainerStarted","Data":"1ce65bd677437e9cf7bf4fe6c449502088e4416d84a20669ae0287c95f4ba83c"} Nov 25 10:24:59 crc kubenswrapper[4760]: I1125 10:24:59.412401 4760 generic.go:334] "Generic (PLEG): container finished" podID="b94ef4f5-688e-40fa-81f0-bda19b5fdda7" containerID="246a640a33702576b01a546cf4d6da1b5f709db051592a740f7388293cfdb945" exitCode=0 Nov 25 10:24:59 crc kubenswrapper[4760]: I1125 10:24:59.412447 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" event={"ID":"b94ef4f5-688e-40fa-81f0-bda19b5fdda7","Type":"ContainerDied","Data":"246a640a33702576b01a546cf4d6da1b5f709db051592a740f7388293cfdb945"} Nov 25 10:24:59 crc kubenswrapper[4760]: I1125 10:24:59.458459 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5lmg8/crc-debug-bcbsg"] Nov 25 10:24:59 crc kubenswrapper[4760]: I1125 10:24:59.467486 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5lmg8/crc-debug-bcbsg"] Nov 25 10:25:00 crc kubenswrapper[4760]: I1125 10:25:00.547623 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:25:00 crc kubenswrapper[4760]: I1125 10:25:00.671114 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2qpj\" (UniqueName: \"kubernetes.io/projected/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-kube-api-access-r2qpj\") pod \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\" (UID: \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\") " Nov 25 10:25:00 crc kubenswrapper[4760]: I1125 10:25:00.671428 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-host\") pod \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\" (UID: \"b94ef4f5-688e-40fa-81f0-bda19b5fdda7\") " Nov 25 10:25:00 crc kubenswrapper[4760]: I1125 10:25:00.672158 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-host" (OuterVolumeSpecName: "host") pod "b94ef4f5-688e-40fa-81f0-bda19b5fdda7" (UID: "b94ef4f5-688e-40fa-81f0-bda19b5fdda7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 10:25:00 crc kubenswrapper[4760]: I1125 10:25:00.685843 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-kube-api-access-r2qpj" (OuterVolumeSpecName: "kube-api-access-r2qpj") pod "b94ef4f5-688e-40fa-81f0-bda19b5fdda7" (UID: "b94ef4f5-688e-40fa-81f0-bda19b5fdda7"). InnerVolumeSpecName "kube-api-access-r2qpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:25:00 crc kubenswrapper[4760]: I1125 10:25:00.773712 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2qpj\" (UniqueName: \"kubernetes.io/projected/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-kube-api-access-r2qpj\") on node \"crc\" DevicePath \"\"" Nov 25 10:25:00 crc kubenswrapper[4760]: I1125 10:25:00.773751 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b94ef4f5-688e-40fa-81f0-bda19b5fdda7-host\") on node \"crc\" DevicePath \"\"" Nov 25 10:25:00 crc kubenswrapper[4760]: I1125 10:25:00.949997 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b94ef4f5-688e-40fa-81f0-bda19b5fdda7" path="/var/lib/kubelet/pods/b94ef4f5-688e-40fa-81f0-bda19b5fdda7/volumes" Nov 25 10:25:01 crc kubenswrapper[4760]: I1125 10:25:01.437082 4760 scope.go:117] "RemoveContainer" containerID="246a640a33702576b01a546cf4d6da1b5f709db051592a740f7388293cfdb945" Nov 25 10:25:01 crc kubenswrapper[4760]: I1125 10:25:01.437132 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/crc-debug-bcbsg" Nov 25 10:25:07 crc kubenswrapper[4760]: E1125 10:25:07.938563 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:26:00 crc kubenswrapper[4760]: I1125 10:26:00.729622 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ansibletest-ansibletest_5fd9b990-91a9-4529-a951-15647544f5ec/ansibletest-ansibletest/0.log" Nov 25 10:26:00 crc kubenswrapper[4760]: I1125 10:26:00.924413 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d84fc8b6b-jxtfg_d99a8e14-f31b-45d8-8e74-8ace724974ad/barbican-api/0.log" Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.001617 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6d84fc8b6b-jxtfg_d99a8e14-f31b-45d8-8e74-8ace724974ad/barbican-api-log/0.log" Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.104673 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b6b6b98f4-9l69x_5d7c9636-175f-4d7e-b3c7-86586c9a8734/barbican-keystone-listener/0.log" Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.469829 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d9875665c-r8sg4_2b1b4f65-ed06-4d6d-9e74-b27255748225/barbican-worker/0.log" Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.537404 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5d9875665c-r8sg4_2b1b4f65-ed06-4d6d-9e74-b27255748225/barbican-worker-log/0.log" Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.625781 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-6b6b6b98f4-9l69x_5d7c9636-175f-4d7e-b3c7-86586c9a8734/barbican-keystone-listener-log/0.log" Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.690820 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j8trh_e324f737-7225-41ec-b3c5-6cc0c2931377/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.745834 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.745891 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.921020 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a55ce36-9d78-4311-a68e-507467c7a1ec/ceilometer-notification-agent/0.log" Nov 25 10:26:01 crc kubenswrapper[4760]: I1125 10:26:01.927573 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a55ce36-9d78-4311-a68e-507467c7a1ec/ceilometer-central-agent/0.log" Nov 25 10:26:02 crc kubenswrapper[4760]: I1125 10:26:02.024075 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4a55ce36-9d78-4311-a68e-507467c7a1ec/sg-core/0.log" Nov 25 10:26:02 crc kubenswrapper[4760]: I1125 10:26:02.031433 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_4a55ce36-9d78-4311-a68e-507467c7a1ec/proxy-httpd/0.log" Nov 25 10:26:02 crc kubenswrapper[4760]: I1125 10:26:02.219483 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-kwrfb_5d87e41c-e89d-4b52-83b7-79d77bee80d9/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:02 crc kubenswrapper[4760]: I1125 10:26:02.338653 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-82kfb_60d03216-7d4d-433d-9e84-7b6a6b399a5f/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:02 crc kubenswrapper[4760]: I1125 10:26:02.559885 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c0a8e435-6d04-48d6-b723-252b8358b055/cinder-api-log/0.log" Nov 25 10:26:02 crc kubenswrapper[4760]: I1125 10:26:02.574933 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c0a8e435-6d04-48d6-b723-252b8358b055/cinder-api/0.log" Nov 25 10:26:02 crc kubenswrapper[4760]: I1125 10:26:02.856596 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_09dd7945-dda4-4682-b55e-44569ec2bc78/probe/0.log" Nov 25 10:26:02 crc kubenswrapper[4760]: I1125 10:26:02.938700 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4e64f72-cbdd-44dc-9c1f-21b88eae9288/cinder-scheduler/0.log" Nov 25 10:26:02 crc kubenswrapper[4760]: I1125 10:26:02.941548 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_09dd7945-dda4-4682-b55e-44569ec2bc78/cinder-backup/0.log" Nov 25 10:26:03 crc kubenswrapper[4760]: I1125 10:26:03.068860 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f4e64f72-cbdd-44dc-9c1f-21b88eae9288/probe/0.log" Nov 25 10:26:03 crc kubenswrapper[4760]: I1125 10:26:03.261958 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f4f729ff-1806-4032-922b-2a47e4a9d7ff/probe/0.log" Nov 25 10:26:03 crc kubenswrapper[4760]: I1125 10:26:03.272660 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f4f729ff-1806-4032-922b-2a47e4a9d7ff/cinder-volume/0.log" Nov 25 10:26:03 crc kubenswrapper[4760]: I1125 10:26:03.378873 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ztrvd_ed298743-8f13-44a6-bbff-1b5702a1a0f5/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:03 crc kubenswrapper[4760]: I1125 10:26:03.516106 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-824fv_bbb80fb1-9cd8-4326-9db9-88edd50fc0d4/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:03 crc kubenswrapper[4760]: I1125 10:26:03.603166 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6885d49d55-9mqqw_1b305350-e74d-4e9a-8af0-14e88ddfccc0/init/0.log" Nov 25 10:26:03 crc kubenswrapper[4760]: I1125 10:26:03.764765 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6885d49d55-9mqqw_1b305350-e74d-4e9a-8af0-14e88ddfccc0/init/0.log" Nov 25 10:26:03 crc kubenswrapper[4760]: I1125 10:26:03.844066 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a3c90ae6-873c-4a00-84a0-a9a60fcc7c74/glance-httpd/0.log" Nov 25 10:26:04 crc kubenswrapper[4760]: I1125 10:26:04.008151 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a3c90ae6-873c-4a00-84a0-a9a60fcc7c74/glance-log/0.log" Nov 25 10:26:04 crc kubenswrapper[4760]: I1125 10:26:04.008821 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6885d49d55-9mqqw_1b305350-e74d-4e9a-8af0-14e88ddfccc0/dnsmasq-dns/0.log" Nov 25 10:26:04 crc kubenswrapper[4760]: I1125 10:26:04.160375 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cad7cc0f-3821-44ee-8b39-71988664ee4e/glance-httpd/0.log" Nov 25 10:26:04 crc kubenswrapper[4760]: I1125 10:26:04.214493 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cad7cc0f-3821-44ee-8b39-71988664ee4e/glance-log/0.log" Nov 25 10:26:04 crc kubenswrapper[4760]: I1125 10:26:04.575462 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizontest-tests-horizontest_aa57ea6c-4740-4010-a3d6-a0e070615d40/horizontest-tests-horizontest/0.log" Nov 25 10:26:04 crc kubenswrapper[4760]: I1125 10:26:04.578496 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6655684d54-8jfvz_0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc/horizon/0.log" Nov 25 10:26:04 crc kubenswrapper[4760]: I1125 10:26:04.901684 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mjz2j_be1883ad-ca79-4bec-89f9-9b783c5047df/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:05 crc kubenswrapper[4760]: I1125 10:26:05.111466 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ggjjn_e3e21edb-5737-49cd-bc9c-407e5f7f5445/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:05 crc kubenswrapper[4760]: I1125 10:26:05.283877 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401021-hxq6l_54e54192-6eff-4b00-a1f6-f9290cb87eca/keystone-cron/0.log" Nov 25 10:26:05 crc kubenswrapper[4760]: I1125 10:26:05.488423 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29401081-7bnrg_d796b091-56b6-4f51-95f8-a4f01db5d9a6/keystone-cron/0.log" Nov 25 10:26:05 crc kubenswrapper[4760]: I1125 10:26:05.663473 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6655684d54-8jfvz_0bbd9fea-6104-467c-8ce2-6f9be5ff8bfc/horizon-log/0.log" Nov 25 10:26:05 crc kubenswrapper[4760]: I1125 10:26:05.667608 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bd20932f-cb28-4343-98df-425123f7c87f/kube-state-metrics/3.log" Nov 25 10:26:05 crc kubenswrapper[4760]: I1125 10:26:05.679344 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bd20932f-cb28-4343-98df-425123f7c87f/kube-state-metrics/2.log" Nov 25 10:26:05 crc kubenswrapper[4760]: I1125 10:26:05.892062 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-v4nvs_2d913348-cf44-4539-b090-181ea0720a33/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:06 crc kubenswrapper[4760]: I1125 10:26:06.060200 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0cc0b6e2-9204-474d-842c-c488ff0811a4/manila-api-log/0.log" Nov 25 10:26:06 crc kubenswrapper[4760]: I1125 10:26:06.209090 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_0cc0b6e2-9204-474d-842c-c488ff0811a4/manila-api/0.log" Nov 25 10:26:06 crc kubenswrapper[4760]: I1125 10:26:06.284917 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_f5b0fe2e-7460-4e1d-85f9-5cccfba89817/probe/0.log" Nov 25 10:26:06 crc kubenswrapper[4760]: I1125 10:26:06.337632 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_f5b0fe2e-7460-4e1d-85f9-5cccfba89817/manila-scheduler/0.log" Nov 25 10:26:06 crc kubenswrapper[4760]: I1125 10:26:06.510078 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-share-share1-0_4424df0c-a7e7-4880-aeb3-e8beaaa57b80/probe/0.log" Nov 25 10:26:06 crc kubenswrapper[4760]: I1125 10:26:06.555418 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_4424df0c-a7e7-4880-aeb3-e8beaaa57b80/manila-share/0.log" Nov 25 10:26:07 crc kubenswrapper[4760]: I1125 10:26:07.252616 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-pp827_01b4af7c-f553-48d7-9166-856497bbe664/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:08 crc kubenswrapper[4760]: I1125 10:26:08.003802 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-564c475cd5-6wg66_9937626b-b050-469f-9e47-78785cfb5c15/neutron-httpd/0.log" Nov 25 10:26:08 crc kubenswrapper[4760]: I1125 10:26:08.278219 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69cbccbbcc-v8kx4_66326df4-af7d-474c-b63f-eee554099e1c/keystone-api/0.log" Nov 25 10:26:08 crc kubenswrapper[4760]: I1125 10:26:08.886179 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-564c475cd5-6wg66_9937626b-b050-469f-9e47-78785cfb5c15/neutron-api/0.log" Nov 25 10:26:09 crc kubenswrapper[4760]: I1125 10:26:09.426347 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8e3cadcf-b35a-4f88-9f0a-684f735164a0/nova-cell0-conductor-conductor/0.log" Nov 25 10:26:09 crc kubenswrapper[4760]: I1125 10:26:09.604922 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_db562c11-b116-4a44-9506-ef67f5211979/nova-cell1-conductor-conductor/0.log" Nov 25 10:26:10 crc kubenswrapper[4760]: I1125 10:26:10.118332 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_012fc757-399f-4a14-9ef8-332e3c34f53a/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 10:26:10 crc 
kubenswrapper[4760]: I1125 10:26:10.189692 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7rlpp_515be97b-ca6d-43a0-b8a1-471a782240bc/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:10 crc kubenswrapper[4760]: I1125 10:26:10.538703 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cf7e8b89-ff82-471f-9255-d3268551c726/nova-metadata-log/0.log" Nov 25 10:26:11 crc kubenswrapper[4760]: I1125 10:26:11.847164 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b4921858-b22b-474b-b8fb-6ccbd97bffac/nova-scheduler-scheduler/0.log" Nov 25 10:26:11 crc kubenswrapper[4760]: I1125 10:26:11.950192 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_32c2adbb-f391-45e9-b20b-db6f61f927eb/nova-api-log/0.log" Nov 25 10:26:12 crc kubenswrapper[4760]: I1125 10:26:12.172331 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_17455e1c-2662-421d-ac93-ce773e1fd50a/mysql-bootstrap/0.log" Nov 25 10:26:12 crc kubenswrapper[4760]: I1125 10:26:12.342633 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_17455e1c-2662-421d-ac93-ce773e1fd50a/mysql-bootstrap/0.log" Nov 25 10:26:12 crc kubenswrapper[4760]: I1125 10:26:12.390053 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_17455e1c-2662-421d-ac93-ce773e1fd50a/galera/0.log" Nov 25 10:26:12 crc kubenswrapper[4760]: I1125 10:26:12.664940 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_de9d3301-bdad-46bf-b7c2-4467cfd590dd/mysql-bootstrap/0.log" Nov 25 10:26:12 crc kubenswrapper[4760]: I1125 10:26:12.837388 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_de9d3301-bdad-46bf-b7c2-4467cfd590dd/mysql-bootstrap/0.log" Nov 25 10:26:12 crc kubenswrapper[4760]: I1125 10:26:12.929792 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_de9d3301-bdad-46bf-b7c2-4467cfd590dd/galera/0.log" Nov 25 10:26:13 crc kubenswrapper[4760]: I1125 10:26:13.136317 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9df819bd-2ca5-4dd0-9409-e8d6e9a80b93/openstackclient/0.log" Nov 25 10:26:13 crc kubenswrapper[4760]: I1125 10:26:13.219325 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_32c2adbb-f391-45e9-b20b-db6f61f927eb/nova-api-api/0.log" Nov 25 10:26:13 crc kubenswrapper[4760]: I1125 10:26:13.345912 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fgpnw_68c768c5-3e1e-41a8-af21-c886ea5959a3/openstack-network-exporter/0.log" Nov 25 10:26:13 crc kubenswrapper[4760]: I1125 10:26:13.534284 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kf25c_d1ba8a40-f479-46dc-b509-a9c4d9c4670b/ovsdb-server-init/0.log" Nov 25 10:26:13 crc kubenswrapper[4760]: I1125 10:26:13.737047 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kf25c_d1ba8a40-f479-46dc-b509-a9c4d9c4670b/ovsdb-server-init/0.log" Nov 25 10:26:13 crc kubenswrapper[4760]: I1125 10:26:13.782075 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kf25c_d1ba8a40-f479-46dc-b509-a9c4d9c4670b/ovs-vswitchd/0.log" Nov 25 10:26:13 crc kubenswrapper[4760]: I1125 10:26:13.791608 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kf25c_d1ba8a40-f479-46dc-b509-a9c4d9c4670b/ovsdb-server/0.log" Nov 25 10:26:14 crc kubenswrapper[4760]: I1125 10:26:14.043444 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-wtp5g_7b050dee-2005-4a2b-8550-6f5d055a86b6/ovn-controller/0.log" Nov 25 10:26:14 crc kubenswrapper[4760]: I1125 10:26:14.263373 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-kjm4v_eaf0aab3-fbd3-4389-ab45-8bd1c834f48f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:14 crc kubenswrapper[4760]: I1125 10:26:14.285011 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22e32299-69a7-4572-8ff1-1d2d409d5137/openstack-network-exporter/0.log" Nov 25 10:26:14 crc kubenswrapper[4760]: I1125 10:26:14.500382 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22e32299-69a7-4572-8ff1-1d2d409d5137/ovn-northd/0.log" Nov 25 10:26:14 crc kubenswrapper[4760]: I1125 10:26:14.517015 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_281d5fd5-dd87-4463-be57-4fd409cf4009/openstack-network-exporter/0.log" Nov 25 10:26:14 crc kubenswrapper[4760]: I1125 10:26:14.683405 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_281d5fd5-dd87-4463-be57-4fd409cf4009/ovsdbserver-nb/0.log" Nov 25 10:26:14 crc kubenswrapper[4760]: I1125 10:26:14.741478 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c1645e51-365a-4195-bb42-5641959bf77f/openstack-network-exporter/0.log" Nov 25 10:26:14 crc kubenswrapper[4760]: I1125 10:26:14.956005 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c1645e51-365a-4195-bb42-5641959bf77f/ovsdbserver-sb/0.log" Nov 25 10:26:14 crc kubenswrapper[4760]: I1125 10:26:14.957549 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cf7e8b89-ff82-471f-9255-d3268551c726/nova-metadata-metadata/0.log" Nov 25 10:26:15 crc kubenswrapper[4760]: I1125 10:26:15.525462 4760 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_placement-598d8454cd-s4vpx_5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4/placement-api/0.log" Nov 25 10:26:15 crc kubenswrapper[4760]: I1125 10:26:15.843523 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_54c05cca-ddf1-4567-b30b-f770bd6b6704/setup-container/0.log" Nov 25 10:26:15 crc kubenswrapper[4760]: I1125 10:26:15.949759 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_54c05cca-ddf1-4567-b30b-f770bd6b6704/setup-container/0.log" Nov 25 10:26:16 crc kubenswrapper[4760]: I1125 10:26:16.099883 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_54c05cca-ddf1-4567-b30b-f770bd6b6704/rabbitmq/0.log" Nov 25 10:26:16 crc kubenswrapper[4760]: I1125 10:26:16.226204 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ac940436-7641-4872-8ab1-f6e0aca87e80/setup-container/0.log" Nov 25 10:26:16 crc kubenswrapper[4760]: I1125 10:26:16.386449 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ac940436-7641-4872-8ab1-f6e0aca87e80/setup-container/0.log" Nov 25 10:26:16 crc kubenswrapper[4760]: I1125 10:26:16.414670 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ac940436-7641-4872-8ab1-f6e0aca87e80/rabbitmq/0.log" Nov 25 10:26:16 crc kubenswrapper[4760]: I1125 10:26:16.491074 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-598d8454cd-s4vpx_5b2f24bc-7408-4e18-bc7d-eab6b3f8b2b4/placement-log/0.log" Nov 25 10:26:16 crc kubenswrapper[4760]: I1125 10:26:16.667125 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-gxmqp_375f35df-5fe0-4456-9d10-649e72a962a7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:16 crc kubenswrapper[4760]: I1125 10:26:16.747751 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qmrt5_5606daaf-d5b9-4ed2-a9aa-5e715141d4e4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:17 crc kubenswrapper[4760]: I1125 10:26:17.023226 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-n86sh_907a9527-c37d-4e36-9a7e-35066c230b6d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:17 crc kubenswrapper[4760]: I1125 10:26:17.200666 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jsv2p_6f68ee3f-7d13-433a-bc6b-504e98ff7b1d/ssh-known-hosts-edpm-deployment/0.log" Nov 25 10:26:17 crc kubenswrapper[4760]: I1125 10:26:17.519064 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_a546f694-04d6-4212-b53a-142420418b97/tempest-tests-tempest-tests-runner/0.log" Nov 25 10:26:17 crc kubenswrapper[4760]: I1125 10:26:17.556033 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_7e76e3b1-69e6-4498-b2f9-a52fdfe1650e/tempest-tests-tempest-tests-runner/0.log" Nov 25 10:26:17 crc kubenswrapper[4760]: I1125 10:26:17.595914 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-ansibletest-ansibletest-ansibletest_3d62b634-2cf7-42e7-b5d4-3791056b146a/test-operator-logs-container/0.log" Nov 25 10:26:17 crc kubenswrapper[4760]: I1125 10:26:17.793203 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-horizontest-horizontest-tests-horizontest_9b073dce-d4e1-4018-bfe6-f0a54597f116/test-operator-logs-container/0.log" Nov 25 10:26:17 crc kubenswrapper[4760]: I1125 10:26:17.869799 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9d79e9ee-084d-41e7-9513-aaea8863e85d/test-operator-logs-container/0.log" Nov 25 10:26:18 crc kubenswrapper[4760]: I1125 10:26:18.104972 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_c1a8f236-1676-4e0e-9395-8500fda5eba2/test-operator-logs-container/0.log" Nov 25 10:26:18 crc kubenswrapper[4760]: I1125 10:26:18.224618 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_5a899175-c606-4361-8300-3c2ed82d823c/tobiko-tests-tobiko/0.log" Nov 25 10:26:18 crc kubenswrapper[4760]: I1125 10:26:18.478463 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_8c968840-fcc2-4c11-baed-7477dfe970d2/tobiko-tests-tobiko/0.log" Nov 25 10:26:18 crc kubenswrapper[4760]: I1125 10:26:18.522836 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-w7rxr_fd5f7e13-b05e-4843-930f-62a3bf6e7ddc/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 10:26:31 crc kubenswrapper[4760]: I1125 10:26:31.745891 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:26:31 crc kubenswrapper[4760]: I1125 10:26:31.746459 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:26:31 crc kubenswrapper[4760]: E1125 10:26:31.939279 
4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:26:33 crc kubenswrapper[4760]: I1125 10:26:33.970332 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f1b32df7-1040-4d21-89cd-d5f772bd4014/memcached/0.log" Nov 25 10:26:45 crc kubenswrapper[4760]: I1125 10:26:45.838572 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-hlbbf_97e97ce2-b50b-478e-acb2-cbdd5232d67c/kube-rbac-proxy/0.log" Nov 25 10:26:45 crc kubenswrapper[4760]: I1125 10:26:45.865645 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-hlbbf_97e97ce2-b50b-478e-acb2-cbdd5232d67c/manager/2.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.066537 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/util/0.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.072693 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-hlbbf_97e97ce2-b50b-478e-acb2-cbdd5232d67c/manager/1.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.260298 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/pull/0.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.307510 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/pull/0.log" 
Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.320044 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/util/0.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.558771 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/util/0.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.559308 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/extract/0.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.559555 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fhsxkd_929428c3-d839-4852-af22-badfb25ecbe5/pull/0.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.779273 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k4dk2_03a9ee81-2733-444d-8edc-ddb1303b5686/manager/2.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.836621 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k4dk2_03a9ee81-2733-444d-8edc-ddb1303b5686/manager/1.log" Nov 25 10:26:46 crc kubenswrapper[4760]: I1125 10:26:46.837195 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-k4dk2_03a9ee81-2733-444d-8edc-ddb1303b5686/kube-rbac-proxy/0.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.053994 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-xghfv_f531ae0e-78ad-4d2c-951f-0d1f7d1c8129/kube-rbac-proxy/0.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.054443 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-xghfv_f531ae0e-78ad-4d2c-951f-0d1f7d1c8129/manager/2.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.108668 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-xghfv_f531ae0e-78ad-4d2c-951f-0d1f7d1c8129/manager/1.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.318088 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-6cjlz_25f372bf-e250-492b-abb9-680b1efdbdec/kube-rbac-proxy/0.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.319864 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-6cjlz_25f372bf-e250-492b-abb9-680b1efdbdec/manager/2.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.386162 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-6cjlz_25f372bf-e250-492b-abb9-680b1efdbdec/manager/1.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.529838 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l24ns_b4325bd6-c276-4fbc-bc67-cf5a026c3537/manager/2.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.534854 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l24ns_b4325bd6-c276-4fbc-bc67-cf5a026c3537/kube-rbac-proxy/0.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.595726 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l24ns_b4325bd6-c276-4fbc-bc67-cf5a026c3537/manager/1.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.718775 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-l28cr_890067e5-2be8-4699-8d90-f2771ef453e5/manager/2.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.720777 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-l28cr_890067e5-2be8-4699-8d90-f2771ef453e5/kube-rbac-proxy/0.log" Nov 25 10:26:47 crc kubenswrapper[4760]: I1125 10:26:47.964045 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-l28cr_890067e5-2be8-4699-8d90-f2771ef453e5/manager/1.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.115949 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-njfjf_33faed21-8b19-4064-a6e2-5064ce8cbab2/manager/2.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.118373 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-njfjf_33faed21-8b19-4064-a6e2-5064ce8cbab2/kube-rbac-proxy/0.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.168809 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-njfjf_33faed21-8b19-4064-a6e2-5064ce8cbab2/manager/1.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.346798 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-x7r44_6dde35ac-ff01-4e46-9eae-234e6abc37dc/manager/2.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 
10:26:48.351119 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-x7r44_6dde35ac-ff01-4e46-9eae-234e6abc37dc/kube-rbac-proxy/0.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.439663 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-x7r44_6dde35ac-ff01-4e46-9eae-234e6abc37dc/manager/1.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.641628 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-kw54v_1d556614-e3c1-4834-919a-0c6f5f5cc4de/manager/3.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.665191 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-kw54v_1d556614-e3c1-4834-919a-0c6f5f5cc4de/kube-rbac-proxy/0.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.668174 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-kw54v_1d556614-e3c1-4834-919a-0c6f5f5cc4de/manager/2.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.867711 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-s4q64_f0f31412-34be-4b9d-8df1-b53d23abb1f6/kube-rbac-proxy/0.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.914701 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-s4q64_f0f31412-34be-4b9d-8df1-b53d23abb1f6/manager/2.log" Nov 25 10:26:48 crc kubenswrapper[4760]: I1125 10:26:48.973222 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-s4q64_f0f31412-34be-4b9d-8df1-b53d23abb1f6/manager/1.log" Nov 25 
10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.113057 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-54bpm_002e6b13-60c5-484c-8116-b4d5241ed678/manager/3.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.115621 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-54bpm_002e6b13-60c5-484c-8116-b4d5241ed678/kube-rbac-proxy/0.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.173617 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-54bpm_002e6b13-60c5-484c-8116-b4d5241ed678/manager/2.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.304554 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l7cv5_9291524e-d650-4366-b795-162d53bf2815/kube-rbac-proxy/0.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.330770 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l7cv5_9291524e-d650-4366-b795-162d53bf2815/manager/2.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.396452 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l7cv5_9291524e-d650-4366-b795-162d53bf2815/manager/1.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.546696 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-cxjcf_4e773e83-c06c-47e9-8a34-ef72472e3ae8/manager/3.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.565083 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-cxjcf_4e773e83-c06c-47e9-8a34-ef72472e3ae8/manager/2.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.633136 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-cxjcf_4e773e83-c06c-47e9-8a34-ef72472e3ae8/kube-rbac-proxy/0.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.739999 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-j5fsj_23471a89-c4fb-4e45-b7bb-2664e4ea99f3/kube-rbac-proxy/0.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.798206 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-j5fsj_23471a89-c4fb-4e45-b7bb-2664e4ea99f3/manager/3.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.830776 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-j5fsj_23471a89-c4fb-4e45-b7bb-2664e4ea99f3/manager/2.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.860576 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-c8gdx_59482a15-4638-4508-b60c-1c60c8df6d09/kube-rbac-proxy/0.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.945555 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-c8gdx_59482a15-4638-4508-b60c-1c60c8df6d09/manager/1.log" Nov 25 10:26:49 crc kubenswrapper[4760]: I1125 10:26:49.997320 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-c8gdx_59482a15-4638-4508-b60c-1c60c8df6d09/manager/0.log" Nov 25 10:26:50 crc kubenswrapper[4760]: 
I1125 10:26:50.046989 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cd5954d9-wmmn4_c43ab37e-375d-4000-8313-9ea135250641/manager/3.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.134904 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cd5954d9-wmmn4_c43ab37e-375d-4000-8313-9ea135250641/manager/2.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.175044 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7759656c4c-n49xc_fe16fe4f-1740-4d43-a0d2-0d1d649c853c/operator/1.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.281135 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7759656c4c-n49xc_fe16fe4f-1740-4d43-a0d2-0d1d649c853c/operator/0.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.381018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wvv98_65361481-df4d-4010-a478-91fd2c50d9e6/kube-rbac-proxy/0.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.413131 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-w94z5_7e50fb1c-ead6-4358-a11b-66963b307f3a/registry-server/0.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.506491 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wvv98_65361481-df4d-4010-a478-91fd2c50d9e6/manager/2.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.593543 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wvv98_65361481-df4d-4010-a478-91fd2c50d9e6/manager/1.log" Nov 25 10:26:50 crc 
kubenswrapper[4760]: I1125 10:26:50.616195 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-w4gcn_6d9d0ad6-0976-4f14-81fb-f286f6768256/manager/2.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.628473 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-w4gcn_6d9d0ad6-0976-4f14-81fb-f286f6768256/kube-rbac-proxy/0.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.693896 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-w4gcn_6d9d0ad6-0976-4f14-81fb-f286f6768256/manager/1.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.805610 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5crqc_a9a9b42e-4d3b-495e-804e-af02af05581d/operator/3.log" Nov 25 10:26:50 crc kubenswrapper[4760]: I1125 10:26:50.972706 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5crqc_a9a9b42e-4d3b-495e-804e-af02af05581d/operator/2.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.054317 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pmw6n_8aea8bb6-720b-412a-acfc-f62366da5de5/kube-rbac-proxy/0.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.133675 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pmw6n_8aea8bb6-720b-412a-acfc-f62366da5de5/manager/2.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.147021 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pmw6n_8aea8bb6-720b-412a-acfc-f62366da5de5/manager/3.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.272011 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-plxrr_cef58941-ae6b-4624-af41-65ab598838eb/kube-rbac-proxy/0.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.284307 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-plxrr_cef58941-ae6b-4624-af41-65ab598838eb/manager/3.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.381856 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-plxrr_cef58941-ae6b-4624-af41-65ab598838eb/manager/2.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.417596 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8566bc9698-5hw7j_042ed3e8-ea28-44f7-9859-2d0a1d5c3e17/kube-rbac-proxy/0.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.491394 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8566bc9698-5hw7j_042ed3e8-ea28-44f7-9859-2d0a1d5c3e17/manager/1.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.507987 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8566bc9698-5hw7j_042ed3e8-ea28-44f7-9859-2d0a1d5c3e17/manager/0.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.603354 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-cr5ch_0f496ee1-ca51-427f-a51d-4fc214c7f50a/kube-rbac-proxy/0.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.657003 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-cr5ch_0f496ee1-ca51-427f-a51d-4fc214c7f50a/manager/2.log" Nov 25 10:26:51 crc kubenswrapper[4760]: I1125 10:26:51.704551 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-cr5ch_0f496ee1-ca51-427f-a51d-4fc214c7f50a/manager/1.log" Nov 25 10:27:01 crc kubenswrapper[4760]: I1125 10:27:01.745824 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 10:27:01 crc kubenswrapper[4760]: I1125 10:27:01.746382 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 10:27:01 crc kubenswrapper[4760]: I1125 10:27:01.746458 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" Nov 25 10:27:01 crc kubenswrapper[4760]: I1125 10:27:01.747083 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 10:27:01 crc kubenswrapper[4760]: I1125 10:27:01.747137 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" gracePeriod=600 Nov 25 10:27:01 crc kubenswrapper[4760]: E1125 10:27:01.889742 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:27:02 crc kubenswrapper[4760]: I1125 10:27:02.599963 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" exitCode=0 Nov 25 10:27:02 crc kubenswrapper[4760]: I1125 10:27:02.600052 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905"} Nov 25 10:27:02 crc kubenswrapper[4760]: I1125 10:27:02.600592 4760 scope.go:117] "RemoveContainer" containerID="780ed74759efa35b2aca3b56abb3e29894df1c2c3771dd97b1caa752192dc819" Nov 25 10:27:02 crc kubenswrapper[4760]: I1125 10:27:02.601728 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:27:02 crc kubenswrapper[4760]: E1125 10:27:02.602052 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:27:08 crc kubenswrapper[4760]: I1125 10:27:08.634142 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pf8bv_3acc0e9c-36be-4834-8450-d68aec396f24/control-plane-machine-set-operator/0.log" Nov 25 10:27:08 crc kubenswrapper[4760]: I1125 10:27:08.826856 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6w6bs_1ffafdad-e326-4d95-8733-e5b5b2197ad9/kube-rbac-proxy/0.log" Nov 25 10:27:08 crc kubenswrapper[4760]: I1125 10:27:08.849208 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6w6bs_1ffafdad-e326-4d95-8733-e5b5b2197ad9/machine-api-operator/0.log" Nov 25 10:27:15 crc kubenswrapper[4760]: I1125 10:27:15.938448 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:27:15 crc kubenswrapper[4760]: E1125 10:27:15.939395 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:27:22 crc kubenswrapper[4760]: I1125 10:27:22.175015 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-86mq8_a6f5c6ad-5f4b-442a-9041-7f053349a0e7/cert-manager-controller/0.log" Nov 25 10:27:22 crc kubenswrapper[4760]: I1125 10:27:22.257485 4760 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-86mq8_a6f5c6ad-5f4b-442a-9041-7f053349a0e7/cert-manager-controller/1.log" Nov 25 10:27:22 crc kubenswrapper[4760]: I1125 10:27:22.382512 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-m6mjj_7498b2f4-5621-4e4d-8d34-d8fc09271dcf/cert-manager-cainjector/2.log" Nov 25 10:27:22 crc kubenswrapper[4760]: I1125 10:27:22.567763 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-m6mjj_7498b2f4-5621-4e4d-8d34-d8fc09271dcf/cert-manager-cainjector/1.log" Nov 25 10:27:22 crc kubenswrapper[4760]: I1125 10:27:22.640986 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-7849w_10171911-dbe6-4b07-a58e-07713d8112c2/cert-manager-webhook/0.log" Nov 25 10:27:29 crc kubenswrapper[4760]: I1125 10:27:29.939171 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:27:29 crc kubenswrapper[4760]: E1125 10:27:29.940304 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:27:36 crc kubenswrapper[4760]: I1125 10:27:36.186896 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-cj4rl_9ccfa2a7-8bcc-4e3f-8bf5-159248b7fe0b/nmstate-console-plugin/0.log" Nov 25 10:27:36 crc kubenswrapper[4760]: I1125 10:27:36.461612 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-c27qr_a7203aa8-a498-4242-9c79-3bcfb384707e/kube-rbac-proxy/0.log" Nov 25 10:27:36 crc kubenswrapper[4760]: I1125 10:27:36.463553 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ld6xj_adb17860-3ba6-4771-88db-d63cebf97628/nmstate-handler/0.log" Nov 25 10:27:36 crc kubenswrapper[4760]: I1125 10:27:36.542752 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-c27qr_a7203aa8-a498-4242-9c79-3bcfb384707e/nmstate-metrics/0.log" Nov 25 10:27:36 crc kubenswrapper[4760]: I1125 10:27:36.748295 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-cjvcc_08faa7c7-5fae-4dc8-9eb8-a83a6f7055ff/nmstate-operator/0.log" Nov 25 10:27:36 crc kubenswrapper[4760]: I1125 10:27:36.837026 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-p7b9n_133b40ac-61d0-4821-813d-a3f722f95293/nmstate-webhook/0.log" Nov 25 10:27:43 crc kubenswrapper[4760]: I1125 10:27:43.939303 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:27:43 crc kubenswrapper[4760]: E1125 10:27:43.940012 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:27:50 crc kubenswrapper[4760]: E1125 10:27:50.938189 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 
truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:27:52 crc kubenswrapper[4760]: I1125 10:27:52.506067 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-wdjm7_e911dae6-d9ed-40d3-802a-e536e5258829/kube-rbac-proxy/0.log" Nov 25 10:27:52 crc kubenswrapper[4760]: I1125 10:27:52.670328 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-wdjm7_e911dae6-d9ed-40d3-802a-e536e5258829/controller/0.log" Nov 25 10:27:52 crc kubenswrapper[4760]: I1125 10:27:52.878339 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-frr-files/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.068005 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-frr-files/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.102560 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-reloader/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.162433 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-reloader/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.175216 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-metrics/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.346429 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-metrics/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.369160 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-metrics/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.369796 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-reloader/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.391450 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-frr-files/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.520763 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-reloader/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.557444 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-frr-files/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.615018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/cp-metrics/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.638761 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/controller/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.781690 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/frr-metrics/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.821851 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/kube-rbac-proxy/0.log" Nov 25 10:27:53 crc kubenswrapper[4760]: I1125 10:27:53.897778 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/kube-rbac-proxy-frr/0.log" Nov 25 10:27:54 crc kubenswrapper[4760]: I1125 10:27:54.062343 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/reloader/0.log" Nov 25 10:27:54 crc kubenswrapper[4760]: I1125 10:27:54.223959 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-fzx95_3531211f-bf66-45cb-9c5f-4a7aca2efbad/frr-k8s-webhook-server/0.log" Nov 25 10:27:54 crc kubenswrapper[4760]: I1125 10:27:54.357299 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76784bbdf-m7z64_394da4a0-f1c0-45c3-a31b-9cace1180c53/manager/3.log" Nov 25 10:27:54 crc kubenswrapper[4760]: I1125 10:27:54.464988 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76784bbdf-m7z64_394da4a0-f1c0-45c3-a31b-9cace1180c53/manager/2.log" Nov 25 10:27:54 crc kubenswrapper[4760]: I1125 10:27:54.633860 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-547776db9-454dl_0f1ca361-a3c2-45c2-86ef-a32c06fe6476/webhook-server/0.log" Nov 25 10:27:54 crc kubenswrapper[4760]: I1125 10:27:54.850327 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m2nhl_44dac91a-5352-4392-ab9b-49c59e38409f/kube-rbac-proxy/0.log" Nov 25 10:27:55 crc kubenswrapper[4760]: I1125 10:27:55.515500 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m2nhl_44dac91a-5352-4392-ab9b-49c59e38409f/speaker/0.log" Nov 25 10:27:56 crc kubenswrapper[4760]: I1125 10:27:56.025775 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pw649_6deb0467-1ded-4513-8aad-5a7b6c671895/frr/0.log" Nov 25 10:27:56 crc kubenswrapper[4760]: I1125 
10:27:56.945144 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:27:56 crc kubenswrapper[4760]: E1125 10:27:56.945746 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.057593 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/util/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.211053 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/util/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.242022 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/pull/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.295909 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/pull/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.467196 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/pull/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.495689 
4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/extract/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.589617 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egm7h8_0554a1c9-798a-47ca-a9c3-7b57e649ddeb/util/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.704819 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-utilities/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.883098 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-content/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.927737 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-utilities/0.log" Nov 25 10:28:09 crc kubenswrapper[4760]: I1125 10:28:09.941714 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-content/0.log" Nov 25 10:28:10 crc kubenswrapper[4760]: I1125 10:28:10.078342 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-utilities/0.log" Nov 25 10:28:10 crc kubenswrapper[4760]: I1125 10:28:10.078545 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/extract-content/0.log" Nov 25 10:28:10 crc kubenswrapper[4760]: I1125 10:28:10.283775 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-utilities/0.log" Nov 25 10:28:10 crc kubenswrapper[4760]: I1125 10:28:10.506790 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-utilities/0.log" Nov 25 10:28:10 crc kubenswrapper[4760]: I1125 10:28:10.511526 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-content/0.log" Nov 25 10:28:10 crc kubenswrapper[4760]: I1125 10:28:10.516645 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-content/0.log" Nov 25 10:28:10 crc kubenswrapper[4760]: I1125 10:28:10.755342 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-utilities/0.log" Nov 25 10:28:10 crc kubenswrapper[4760]: I1125 10:28:10.792792 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bjblx_35a85cc6-c1dd-4791-a1c5-d6853d955877/registry-server/0.log" Nov 25 10:28:10 crc kubenswrapper[4760]: I1125 10:28:10.801862 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/extract-content/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: I1125 10:28:11.049944 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/util/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: I1125 10:28:11.289960 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/pull/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: I1125 10:28:11.350318 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/pull/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: I1125 10:28:11.374910 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/util/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: I1125 10:28:11.523801 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/util/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: I1125 10:28:11.595044 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/extract/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: I1125 10:28:11.602954 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zgb5w_2230ed24-958d-42e6-8c36-87e8b4cede69/pull/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: I1125 10:28:11.822482 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ttwnc_3d8e687e-f18e-4f36-aefc-59c644196614/registry-server/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: I1125 10:28:11.847909 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8s28s_613c9059-f285-4892-96c6-e27686513a0a/marketplace-operator/0.log" Nov 25 10:28:11 crc kubenswrapper[4760]: 
I1125 10:28:11.938964 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:28:11 crc kubenswrapper[4760]: E1125 10:28:11.939298 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.027960 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-utilities/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.237981 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-utilities/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.243437 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-content/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.261478 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-content/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.418884 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-utilities/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.419149 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/extract-content/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.650094 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-utilities/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.696701 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-x6k2l_41eb0ddf-5d08-46bc-b6d4-59f6f86369e6/registry-server/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.808360 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-utilities/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.829555 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-content/0.log" Nov 25 10:28:12 crc kubenswrapper[4760]: I1125 10:28:12.865486 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-content/0.log" Nov 25 10:28:13 crc kubenswrapper[4760]: I1125 10:28:13.129803 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-utilities/0.log" Nov 25 10:28:13 crc kubenswrapper[4760]: I1125 10:28:13.186845 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/extract-content/0.log" Nov 25 10:28:14 crc kubenswrapper[4760]: I1125 10:28:14.171127 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8skdl_b7304e75-6f0d-481d-8fbc-5de0e061032d/registry-server/0.log" Nov 25 
10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.548366 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jvw7l"] Nov 25 10:28:22 crc kubenswrapper[4760]: E1125 10:28:22.549327 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b94ef4f5-688e-40fa-81f0-bda19b5fdda7" containerName="container-00" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.549341 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94ef4f5-688e-40fa-81f0-bda19b5fdda7" containerName="container-00" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.549533 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94ef4f5-688e-40fa-81f0-bda19b5fdda7" containerName="container-00" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.550964 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.585372 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvw7l"] Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.655718 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrt5x\" (UniqueName: \"kubernetes.io/projected/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-kube-api-access-hrt5x\") pod \"redhat-marketplace-jvw7l\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.655790 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-utilities\") pod \"redhat-marketplace-jvw7l\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 
10:28:22.655907 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-catalog-content\") pod \"redhat-marketplace-jvw7l\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.758094 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-utilities\") pod \"redhat-marketplace-jvw7l\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.758146 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-catalog-content\") pod \"redhat-marketplace-jvw7l\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.758352 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrt5x\" (UniqueName: \"kubernetes.io/projected/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-kube-api-access-hrt5x\") pod \"redhat-marketplace-jvw7l\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.758720 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-catalog-content\") pod \"redhat-marketplace-jvw7l\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 
10:28:22.758736 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-utilities\") pod \"redhat-marketplace-jvw7l\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.779639 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrt5x\" (UniqueName: \"kubernetes.io/projected/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-kube-api-access-hrt5x\") pod \"redhat-marketplace-jvw7l\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:22 crc kubenswrapper[4760]: I1125 10:28:22.891649 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:23 crc kubenswrapper[4760]: I1125 10:28:23.393896 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvw7l"] Nov 25 10:28:23 crc kubenswrapper[4760]: I1125 10:28:23.409406 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvw7l" event={"ID":"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5","Type":"ContainerStarted","Data":"8c305cca7d88e0267b0aba19ba4e188a7275355ff75d54a40f37166cfa34a453"} Nov 25 10:28:24 crc kubenswrapper[4760]: I1125 10:28:24.420278 4760 generic.go:334] "Generic (PLEG): container finished" podID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerID="58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d" exitCode=0 Nov 25 10:28:24 crc kubenswrapper[4760]: I1125 10:28:24.420373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvw7l" event={"ID":"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5","Type":"ContainerDied","Data":"58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d"} Nov 25 
10:28:24 crc kubenswrapper[4760]: I1125 10:28:24.422563 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 10:28:25 crc kubenswrapper[4760]: I1125 10:28:25.939644 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:28:25 crc kubenswrapper[4760]: E1125 10:28:25.940318 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:28:26 crc kubenswrapper[4760]: I1125 10:28:26.446326 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvw7l" event={"ID":"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5","Type":"ContainerStarted","Data":"d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665"} Nov 25 10:28:27 crc kubenswrapper[4760]: I1125 10:28:27.455125 4760 generic.go:334] "Generic (PLEG): container finished" podID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerID="d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665" exitCode=0 Nov 25 10:28:27 crc kubenswrapper[4760]: I1125 10:28:27.455167 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvw7l" event={"ID":"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5","Type":"ContainerDied","Data":"d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665"} Nov 25 10:28:28 crc kubenswrapper[4760]: I1125 10:28:28.467417 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvw7l" 
event={"ID":"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5","Type":"ContainerStarted","Data":"40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b"} Nov 25 10:28:28 crc kubenswrapper[4760]: I1125 10:28:28.496134 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jvw7l" podStartSLOduration=2.803888286 podStartE2EDuration="6.49611469s" podCreationTimestamp="2025-11-25 10:28:22 +0000 UTC" firstStartedPulling="2025-11-25 10:28:24.422346723 +0000 UTC m=+8238.131377518" lastFinishedPulling="2025-11-25 10:28:28.114573127 +0000 UTC m=+8241.823603922" observedRunningTime="2025-11-25 10:28:28.488677399 +0000 UTC m=+8242.197708214" watchObservedRunningTime="2025-11-25 10:28:28.49611469 +0000 UTC m=+8242.205145485" Nov 25 10:28:32 crc kubenswrapper[4760]: I1125 10:28:32.892011 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:32 crc kubenswrapper[4760]: I1125 10:28:32.892424 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:32 crc kubenswrapper[4760]: I1125 10:28:32.952896 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:33 crc kubenswrapper[4760]: I1125 10:28:33.598686 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:33 crc kubenswrapper[4760]: I1125 10:28:33.664158 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvw7l"] Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.547905 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jvw7l" podUID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerName="registry-server" 
containerID="cri-o://40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b" gracePeriod=2 Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.619292 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w5nng"] Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.621340 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.634941 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w5nng"] Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.676131 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwgbj\" (UniqueName: \"kubernetes.io/projected/5621e683-6e0f-4756-b26f-0cef0cce3b4f-kube-api-access-cwgbj\") pod \"community-operators-w5nng\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.679040 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-utilities\") pod \"community-operators-w5nng\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.679263 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-catalog-content\") pod \"community-operators-w5nng\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.786550 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cwgbj\" (UniqueName: \"kubernetes.io/projected/5621e683-6e0f-4756-b26f-0cef0cce3b4f-kube-api-access-cwgbj\") pod \"community-operators-w5nng\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.787210 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-utilities\") pod \"community-operators-w5nng\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.787366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-catalog-content\") pod \"community-operators-w5nng\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.788173 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-catalog-content\") pod \"community-operators-w5nng\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.788707 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-utilities\") pod \"community-operators-w5nng\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:35 crc kubenswrapper[4760]: I1125 10:28:35.834967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cwgbj\" (UniqueName: \"kubernetes.io/projected/5621e683-6e0f-4756-b26f-0cef0cce3b4f-kube-api-access-cwgbj\") pod \"community-operators-w5nng\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.032728 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.057375 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.196998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-catalog-content\") pod \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.197043 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrt5x\" (UniqueName: \"kubernetes.io/projected/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-kube-api-access-hrt5x\") pod \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.197067 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-utilities\") pod \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\" (UID: \"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5\") " Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.199020 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-utilities" (OuterVolumeSpecName: "utilities") pod 
"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" (UID: "841412e2-e7ff-4d64-b1e2-fe4832b7a3c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.208777 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-kube-api-access-hrt5x" (OuterVolumeSpecName: "kube-api-access-hrt5x") pod "841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" (UID: "841412e2-e7ff-4d64-b1e2-fe4832b7a3c5"). InnerVolumeSpecName "kube-api-access-hrt5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.219832 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" (UID: "841412e2-e7ff-4d64-b1e2-fe4832b7a3c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.299097 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.299140 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrt5x\" (UniqueName: \"kubernetes.io/projected/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-kube-api-access-hrt5x\") on node \"crc\" DevicePath \"\"" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.299155 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.579108 4760 generic.go:334] "Generic (PLEG): container finished" podID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerID="40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b" exitCode=0 Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.579326 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvw7l" event={"ID":"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5","Type":"ContainerDied","Data":"40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b"} Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.579587 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvw7l" event={"ID":"841412e2-e7ff-4d64-b1e2-fe4832b7a3c5","Type":"ContainerDied","Data":"8c305cca7d88e0267b0aba19ba4e188a7275355ff75d54a40f37166cfa34a453"} Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.579614 4760 scope.go:117] "RemoveContainer" containerID="40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 
10:28:36.579479 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvw7l" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.622280 4760 scope.go:117] "RemoveContainer" containerID="d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.639214 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvw7l"] Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.652543 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvw7l"] Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.672267 4760 scope.go:117] "RemoveContainer" containerID="58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.746875 4760 scope.go:117] "RemoveContainer" containerID="40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b" Nov 25 10:28:36 crc kubenswrapper[4760]: E1125 10:28:36.748734 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b\": container with ID starting with 40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b not found: ID does not exist" containerID="40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.748774 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b"} err="failed to get container status \"40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b\": rpc error: code = NotFound desc = could not find container \"40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b\": container with ID starting with 
40ddb14c18ef5d4b206eb04059c848f01be8d50d16a24350e416616c542cf84b not found: ID does not exist" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.748806 4760 scope.go:117] "RemoveContainer" containerID="d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665" Nov 25 10:28:36 crc kubenswrapper[4760]: E1125 10:28:36.750520 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665\": container with ID starting with d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665 not found: ID does not exist" containerID="d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.750558 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665"} err="failed to get container status \"d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665\": rpc error: code = NotFound desc = could not find container \"d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665\": container with ID starting with d7b47a28c93e1df73a342d8e722ac5e39871b48268c0d6cafcc36c7f2303b665 not found: ID does not exist" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.750581 4760 scope.go:117] "RemoveContainer" containerID="58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d" Nov 25 10:28:36 crc kubenswrapper[4760]: E1125 10:28:36.750950 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d\": container with ID starting with 58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d not found: ID does not exist" containerID="58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d" Nov 25 10:28:36 crc 
kubenswrapper[4760]: I1125 10:28:36.750982 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d"} err="failed to get container status \"58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d\": rpc error: code = NotFound desc = could not find container \"58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d\": container with ID starting with 58868966f24b34f1371fd2916158d41d2c04ae206d8e111bc5f8c9aeaa5bac0d not found: ID does not exist" Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.844089 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w5nng"] Nov 25 10:28:36 crc kubenswrapper[4760]: I1125 10:28:36.977619 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" path="/var/lib/kubelet/pods/841412e2-e7ff-4d64-b1e2-fe4832b7a3c5/volumes" Nov 25 10:28:37 crc kubenswrapper[4760]: I1125 10:28:37.597477 4760 generic.go:334] "Generic (PLEG): container finished" podID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerID="18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38" exitCode=0 Nov 25 10:28:37 crc kubenswrapper[4760]: I1125 10:28:37.597713 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5nng" event={"ID":"5621e683-6e0f-4756-b26f-0cef0cce3b4f","Type":"ContainerDied","Data":"18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38"} Nov 25 10:28:37 crc kubenswrapper[4760]: I1125 10:28:37.597992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5nng" event={"ID":"5621e683-6e0f-4756-b26f-0cef0cce3b4f","Type":"ContainerStarted","Data":"528e6de2e34afb6f12df27cb05562948bc0b4c2f1b195e60c5debefd011ed928"} Nov 25 10:28:38 crc kubenswrapper[4760]: I1125 10:28:38.938269 4760 scope.go:117] "RemoveContainer" 
containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:28:38 crc kubenswrapper[4760]: E1125 10:28:38.939007 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:28:39 crc kubenswrapper[4760]: I1125 10:28:39.616729 4760 generic.go:334] "Generic (PLEG): container finished" podID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerID="f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4" exitCode=0 Nov 25 10:28:39 crc kubenswrapper[4760]: I1125 10:28:39.616784 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5nng" event={"ID":"5621e683-6e0f-4756-b26f-0cef0cce3b4f","Type":"ContainerDied","Data":"f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4"} Nov 25 10:28:43 crc kubenswrapper[4760]: E1125 10:28:43.296963 4760 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.21:56784->38.129.56.21:33427: read tcp 38.129.56.21:56784->38.129.56.21:33427: read: connection reset by peer Nov 25 10:28:43 crc kubenswrapper[4760]: I1125 10:28:43.654395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5nng" event={"ID":"5621e683-6e0f-4756-b26f-0cef0cce3b4f","Type":"ContainerStarted","Data":"f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498"} Nov 25 10:28:43 crc kubenswrapper[4760]: I1125 10:28:43.682237 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w5nng" podStartSLOduration=3.051403152 podStartE2EDuration="8.682195149s" 
podCreationTimestamp="2025-11-25 10:28:35 +0000 UTC" firstStartedPulling="2025-11-25 10:28:37.60130744 +0000 UTC m=+8251.310338235" lastFinishedPulling="2025-11-25 10:28:43.232099437 +0000 UTC m=+8256.941130232" observedRunningTime="2025-11-25 10:28:43.678523925 +0000 UTC m=+8257.387554730" watchObservedRunningTime="2025-11-25 10:28:43.682195149 +0000 UTC m=+8257.391225944" Nov 25 10:28:46 crc kubenswrapper[4760]: I1125 10:28:46.032755 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:46 crc kubenswrapper[4760]: I1125 10:28:46.033139 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:46 crc kubenswrapper[4760]: I1125 10:28:46.090595 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:51 crc kubenswrapper[4760]: I1125 10:28:51.938996 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:28:51 crc kubenswrapper[4760]: E1125 10:28:51.939889 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:28:56 crc kubenswrapper[4760]: I1125 10:28:56.121139 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:56 crc kubenswrapper[4760]: I1125 10:28:56.177119 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w5nng"] Nov 25 10:28:56 
crc kubenswrapper[4760]: I1125 10:28:56.788170 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w5nng" podUID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerName="registry-server" containerID="cri-o://f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498" gracePeriod=2 Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.315794 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.414054 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-utilities\") pod \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.414232 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-catalog-content\") pod \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.414288 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwgbj\" (UniqueName: \"kubernetes.io/projected/5621e683-6e0f-4756-b26f-0cef0cce3b4f-kube-api-access-cwgbj\") pod \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\" (UID: \"5621e683-6e0f-4756-b26f-0cef0cce3b4f\") " Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.414998 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-utilities" (OuterVolumeSpecName: "utilities") pod "5621e683-6e0f-4756-b26f-0cef0cce3b4f" (UID: "5621e683-6e0f-4756-b26f-0cef0cce3b4f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.420515 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5621e683-6e0f-4756-b26f-0cef0cce3b4f-kube-api-access-cwgbj" (OuterVolumeSpecName: "kube-api-access-cwgbj") pod "5621e683-6e0f-4756-b26f-0cef0cce3b4f" (UID: "5621e683-6e0f-4756-b26f-0cef0cce3b4f"). InnerVolumeSpecName "kube-api-access-cwgbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.468626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5621e683-6e0f-4756-b26f-0cef0cce3b4f" (UID: "5621e683-6e0f-4756-b26f-0cef0cce3b4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.517413 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.517575 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwgbj\" (UniqueName: \"kubernetes.io/projected/5621e683-6e0f-4756-b26f-0cef0cce3b4f-kube-api-access-cwgbj\") on node \"crc\" DevicePath \"\"" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.517593 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5621e683-6e0f-4756-b26f-0cef0cce3b4f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.802585 4760 generic.go:334] "Generic (PLEG): container finished" podID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" 
containerID="f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498" exitCode=0 Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.802676 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w5nng" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.802669 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5nng" event={"ID":"5621e683-6e0f-4756-b26f-0cef0cce3b4f","Type":"ContainerDied","Data":"f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498"} Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.802904 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w5nng" event={"ID":"5621e683-6e0f-4756-b26f-0cef0cce3b4f","Type":"ContainerDied","Data":"528e6de2e34afb6f12df27cb05562948bc0b4c2f1b195e60c5debefd011ed928"} Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.802954 4760 scope.go:117] "RemoveContainer" containerID="f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.851521 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w5nng"] Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.857438 4760 scope.go:117] "RemoveContainer" containerID="f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.866093 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w5nng"] Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.886188 4760 scope.go:117] "RemoveContainer" containerID="18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.951053 4760 scope.go:117] "RemoveContainer" containerID="f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498" Nov 25 
10:28:57 crc kubenswrapper[4760]: E1125 10:28:57.951935 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498\": container with ID starting with f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498 not found: ID does not exist" containerID="f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.951976 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498"} err="failed to get container status \"f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498\": rpc error: code = NotFound desc = could not find container \"f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498\": container with ID starting with f287651d276e6dd29137a0417e9b9a52b13807927b92fd3857f1e80d337ed498 not found: ID does not exist" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.952004 4760 scope.go:117] "RemoveContainer" containerID="f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4" Nov 25 10:28:57 crc kubenswrapper[4760]: E1125 10:28:57.952420 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4\": container with ID starting with f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4 not found: ID does not exist" containerID="f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.952450 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4"} err="failed to get container status 
\"f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4\": rpc error: code = NotFound desc = could not find container \"f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4\": container with ID starting with f51a6df13db1eab48457a7fb10eb5dcac08f65e3c435b32853e1c80c89487bf4 not found: ID does not exist" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.952466 4760 scope.go:117] "RemoveContainer" containerID="18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38" Nov 25 10:28:57 crc kubenswrapper[4760]: E1125 10:28:57.952971 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38\": container with ID starting with 18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38 not found: ID does not exist" containerID="18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38" Nov 25 10:28:57 crc kubenswrapper[4760]: I1125 10:28:57.952992 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38"} err="failed to get container status \"18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38\": rpc error: code = NotFound desc = could not find container \"18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38\": container with ID starting with 18e3cfab326b522ae7786c7e7877f209590ef133a7a8a44edc9fc7857c82fd38 not found: ID does not exist" Nov 25 10:28:58 crc kubenswrapper[4760]: I1125 10:28:58.955941 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" path="/var/lib/kubelet/pods/5621e683-6e0f-4756-b26f-0cef0cce3b4f/volumes" Nov 25 10:29:05 crc kubenswrapper[4760]: I1125 10:29:05.938527 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 
10:29:05 crc kubenswrapper[4760]: E1125 10:29:05.939502 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:29:11 crc kubenswrapper[4760]: E1125 10:29:11.944987 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:29:18 crc kubenswrapper[4760]: I1125 10:29:18.942949 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:29:18 crc kubenswrapper[4760]: E1125 10:29:18.943663 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:29:33 crc kubenswrapper[4760]: I1125 10:29:33.939009 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:29:33 crc kubenswrapper[4760]: E1125 10:29:33.939636 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:29:48 crc kubenswrapper[4760]: I1125 10:29:48.939293 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:29:48 crc kubenswrapper[4760]: E1125 10:29:48.940212 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:29:59 crc kubenswrapper[4760]: I1125 10:29:59.938624 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:29:59 crc kubenswrapper[4760]: E1125 10:29:59.939472 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.165150 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47"] Nov 25 10:30:00 crc kubenswrapper[4760]: E1125 10:30:00.165680 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerName="extract-utilities" Nov 25 10:30:00 crc 
kubenswrapper[4760]: I1125 10:30:00.165707 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerName="extract-utilities" Nov 25 10:30:00 crc kubenswrapper[4760]: E1125 10:30:00.165798 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerName="registry-server" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.165808 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerName="registry-server" Nov 25 10:30:00 crc kubenswrapper[4760]: E1125 10:30:00.165823 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerName="extract-utilities" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.165831 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerName="extract-utilities" Nov 25 10:30:00 crc kubenswrapper[4760]: E1125 10:30:00.165854 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerName="registry-server" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.165862 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerName="registry-server" Nov 25 10:30:00 crc kubenswrapper[4760]: E1125 10:30:00.165881 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerName="extract-content" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.165888 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerName="extract-content" Nov 25 10:30:00 crc kubenswrapper[4760]: E1125 10:30:00.165912 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerName="extract-content" Nov 25 10:30:00 crc 
kubenswrapper[4760]: I1125 10:30:00.165919 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerName="extract-content" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.166170 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5621e683-6e0f-4756-b26f-0cef0cce3b4f" containerName="registry-server" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.166208 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="841412e2-e7ff-4d64-b1e2-fe4832b7a3c5" containerName="registry-server" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.168072 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.171360 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.171623 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.182334 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47"] Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.270891 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab91276f-db4b-47ea-8067-86b233d7bc81-secret-volume\") pod \"collect-profiles-29401110-x4s47\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.271276 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab91276f-db4b-47ea-8067-86b233d7bc81-config-volume\") pod \"collect-profiles-29401110-x4s47\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.271388 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95zfd\" (UniqueName: \"kubernetes.io/projected/ab91276f-db4b-47ea-8067-86b233d7bc81-kube-api-access-95zfd\") pod \"collect-profiles-29401110-x4s47\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.373019 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab91276f-db4b-47ea-8067-86b233d7bc81-secret-volume\") pod \"collect-profiles-29401110-x4s47\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.373103 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab91276f-db4b-47ea-8067-86b233d7bc81-config-volume\") pod \"collect-profiles-29401110-x4s47\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.373194 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95zfd\" (UniqueName: \"kubernetes.io/projected/ab91276f-db4b-47ea-8067-86b233d7bc81-kube-api-access-95zfd\") pod \"collect-profiles-29401110-x4s47\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.374155 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab91276f-db4b-47ea-8067-86b233d7bc81-config-volume\") pod \"collect-profiles-29401110-x4s47\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.382564 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab91276f-db4b-47ea-8067-86b233d7bc81-secret-volume\") pod \"collect-profiles-29401110-x4s47\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.391725 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95zfd\" (UniqueName: \"kubernetes.io/projected/ab91276f-db4b-47ea-8067-86b233d7bc81-kube-api-access-95zfd\") pod \"collect-profiles-29401110-x4s47\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.500615 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:00 crc kubenswrapper[4760]: I1125 10:30:00.992176 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47"] Nov 25 10:30:01 crc kubenswrapper[4760]: I1125 10:30:01.518982 4760 generic.go:334] "Generic (PLEG): container finished" podID="ab91276f-db4b-47ea-8067-86b233d7bc81" containerID="b8000479f6dc756bf8522650835e0e832e3c08dbaabc6eefb3e9e0749784a469" exitCode=0 Nov 25 10:30:01 crc kubenswrapper[4760]: I1125 10:30:01.519049 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" event={"ID":"ab91276f-db4b-47ea-8067-86b233d7bc81","Type":"ContainerDied","Data":"b8000479f6dc756bf8522650835e0e832e3c08dbaabc6eefb3e9e0749784a469"} Nov 25 10:30:01 crc kubenswrapper[4760]: I1125 10:30:01.519224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" event={"ID":"ab91276f-db4b-47ea-8067-86b233d7bc81","Type":"ContainerStarted","Data":"b8c187fede7c11b743acde3d04f629ac0bef24c47f59c64de5733e04fbcdbb40"} Nov 25 10:30:02 crc kubenswrapper[4760]: I1125 10:30:02.861762 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:02 crc kubenswrapper[4760]: I1125 10:30:02.932121 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95zfd\" (UniqueName: \"kubernetes.io/projected/ab91276f-db4b-47ea-8067-86b233d7bc81-kube-api-access-95zfd\") pod \"ab91276f-db4b-47ea-8067-86b233d7bc81\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " Nov 25 10:30:02 crc kubenswrapper[4760]: I1125 10:30:02.932208 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab91276f-db4b-47ea-8067-86b233d7bc81-secret-volume\") pod \"ab91276f-db4b-47ea-8067-86b233d7bc81\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " Nov 25 10:30:02 crc kubenswrapper[4760]: I1125 10:30:02.932433 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab91276f-db4b-47ea-8067-86b233d7bc81-config-volume\") pod \"ab91276f-db4b-47ea-8067-86b233d7bc81\" (UID: \"ab91276f-db4b-47ea-8067-86b233d7bc81\") " Nov 25 10:30:02 crc kubenswrapper[4760]: I1125 10:30:02.932978 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab91276f-db4b-47ea-8067-86b233d7bc81-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab91276f-db4b-47ea-8067-86b233d7bc81" (UID: "ab91276f-db4b-47ea-8067-86b233d7bc81"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 10:30:02 crc kubenswrapper[4760]: I1125 10:30:02.933374 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab91276f-db4b-47ea-8067-86b233d7bc81-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:30:02 crc kubenswrapper[4760]: I1125 10:30:02.938789 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab91276f-db4b-47ea-8067-86b233d7bc81-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab91276f-db4b-47ea-8067-86b233d7bc81" (UID: "ab91276f-db4b-47ea-8067-86b233d7bc81"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 10:30:02 crc kubenswrapper[4760]: I1125 10:30:02.939043 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab91276f-db4b-47ea-8067-86b233d7bc81-kube-api-access-95zfd" (OuterVolumeSpecName: "kube-api-access-95zfd") pod "ab91276f-db4b-47ea-8067-86b233d7bc81" (UID: "ab91276f-db4b-47ea-8067-86b233d7bc81"). InnerVolumeSpecName "kube-api-access-95zfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:30:03 crc kubenswrapper[4760]: I1125 10:30:03.035812 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95zfd\" (UniqueName: \"kubernetes.io/projected/ab91276f-db4b-47ea-8067-86b233d7bc81-kube-api-access-95zfd\") on node \"crc\" DevicePath \"\"" Nov 25 10:30:03 crc kubenswrapper[4760]: I1125 10:30:03.036135 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab91276f-db4b-47ea-8067-86b233d7bc81-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 10:30:03 crc kubenswrapper[4760]: I1125 10:30:03.538817 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" Nov 25 10:30:03 crc kubenswrapper[4760]: I1125 10:30:03.548915 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401110-x4s47" event={"ID":"ab91276f-db4b-47ea-8067-86b233d7bc81","Type":"ContainerDied","Data":"b8c187fede7c11b743acde3d04f629ac0bef24c47f59c64de5733e04fbcdbb40"} Nov 25 10:30:03 crc kubenswrapper[4760]: I1125 10:30:03.549007 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8c187fede7c11b743acde3d04f629ac0bef24c47f59c64de5733e04fbcdbb40" Nov 25 10:30:03 crc kubenswrapper[4760]: I1125 10:30:03.931955 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz"] Nov 25 10:30:03 crc kubenswrapper[4760]: I1125 10:30:03.940266 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401065-blvdz"] Nov 25 10:30:04 crc kubenswrapper[4760]: I1125 10:30:04.950257 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df4b1db-3f56-44f6-9e36-121c251339f1" path="/var/lib/kubelet/pods/5df4b1db-3f56-44f6-9e36-121c251339f1/volumes" Nov 25 10:30:10 crc kubenswrapper[4760]: I1125 10:30:10.938792 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:30:10 crc kubenswrapper[4760]: E1125 10:30:10.939743 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:30:24 crc 
kubenswrapper[4760]: I1125 10:30:24.939512 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:30:24 crc kubenswrapper[4760]: E1125 10:30:24.940637 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:30:24 crc kubenswrapper[4760]: E1125 10:30:24.939564 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:30:34 crc kubenswrapper[4760]: I1125 10:30:34.897859 4760 generic.go:334] "Generic (PLEG): container finished" podID="476479c3-79d3-4f4a-92c6-95e623dddb3d" containerID="276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e" exitCode=0 Nov 25 10:30:34 crc kubenswrapper[4760]: I1125 10:30:34.897969 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5lmg8/must-gather-p6glf" event={"ID":"476479c3-79d3-4f4a-92c6-95e623dddb3d","Type":"ContainerDied","Data":"276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e"} Nov 25 10:30:34 crc kubenswrapper[4760]: I1125 10:30:34.899168 4760 scope.go:117] "RemoveContainer" containerID="276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e" Nov 25 10:30:35 crc kubenswrapper[4760]: I1125 10:30:35.205453 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5lmg8_must-gather-p6glf_476479c3-79d3-4f4a-92c6-95e623dddb3d/gather/0.log" Nov 25 10:30:37 crc kubenswrapper[4760]: I1125 
10:30:37.349979 4760 scope.go:117] "RemoveContainer" containerID="d917512a8db434e7f7f0b18a7a41e1d02eccf50854b54f3ee2e4e9307802be51" Nov 25 10:30:37 crc kubenswrapper[4760]: I1125 10:30:37.403693 4760 scope.go:117] "RemoveContainer" containerID="8499f4b50983c42d806f726998cb5b93dd50dfa16af9da5742b87b91b260a362" Nov 25 10:30:39 crc kubenswrapper[4760]: I1125 10:30:39.938393 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:30:39 crc kubenswrapper[4760]: E1125 10:30:39.938936 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.095260 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5lmg8/must-gather-p6glf"] Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.096076 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5lmg8/must-gather-p6glf" podUID="476479c3-79d3-4f4a-92c6-95e623dddb3d" containerName="copy" containerID="cri-o://51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5" gracePeriod=2 Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.106336 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5lmg8/must-gather-p6glf"] Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.524569 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5lmg8_must-gather-p6glf_476479c3-79d3-4f4a-92c6-95e623dddb3d/copy/0.log" Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.525019 4760 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.585951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/476479c3-79d3-4f4a-92c6-95e623dddb3d-must-gather-output\") pod \"476479c3-79d3-4f4a-92c6-95e623dddb3d\" (UID: \"476479c3-79d3-4f4a-92c6-95e623dddb3d\") " Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.586027 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pscr\" (UniqueName: \"kubernetes.io/projected/476479c3-79d3-4f4a-92c6-95e623dddb3d-kube-api-access-8pscr\") pod \"476479c3-79d3-4f4a-92c6-95e623dddb3d\" (UID: \"476479c3-79d3-4f4a-92c6-95e623dddb3d\") " Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.595530 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476479c3-79d3-4f4a-92c6-95e623dddb3d-kube-api-access-8pscr" (OuterVolumeSpecName: "kube-api-access-8pscr") pod "476479c3-79d3-4f4a-92c6-95e623dddb3d" (UID: "476479c3-79d3-4f4a-92c6-95e623dddb3d"). InnerVolumeSpecName "kube-api-access-8pscr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.688483 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pscr\" (UniqueName: \"kubernetes.io/projected/476479c3-79d3-4f4a-92c6-95e623dddb3d-kube-api-access-8pscr\") on node \"crc\" DevicePath \"\"" Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.777841 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476479c3-79d3-4f4a-92c6-95e623dddb3d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "476479c3-79d3-4f4a-92c6-95e623dddb3d" (UID: "476479c3-79d3-4f4a-92c6-95e623dddb3d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.790719 4760 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/476479c3-79d3-4f4a-92c6-95e623dddb3d-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 10:30:48 crc kubenswrapper[4760]: I1125 10:30:48.949300 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476479c3-79d3-4f4a-92c6-95e623dddb3d" path="/var/lib/kubelet/pods/476479c3-79d3-4f4a-92c6-95e623dddb3d/volumes" Nov 25 10:30:49 crc kubenswrapper[4760]: I1125 10:30:49.039280 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5lmg8_must-gather-p6glf_476479c3-79d3-4f4a-92c6-95e623dddb3d/copy/0.log" Nov 25 10:30:49 crc kubenswrapper[4760]: I1125 10:30:49.039930 4760 generic.go:334] "Generic (PLEG): container finished" podID="476479c3-79d3-4f4a-92c6-95e623dddb3d" containerID="51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5" exitCode=143 Nov 25 10:30:49 crc kubenswrapper[4760]: I1125 10:30:49.039989 4760 scope.go:117] "RemoveContainer" containerID="51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5" Nov 25 10:30:49 crc kubenswrapper[4760]: I1125 10:30:49.040039 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5lmg8/must-gather-p6glf" Nov 25 10:30:49 crc kubenswrapper[4760]: I1125 10:30:49.068395 4760 scope.go:117] "RemoveContainer" containerID="276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e" Nov 25 10:30:49 crc kubenswrapper[4760]: I1125 10:30:49.154421 4760 scope.go:117] "RemoveContainer" containerID="51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5" Nov 25 10:30:49 crc kubenswrapper[4760]: E1125 10:30:49.155758 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5\": container with ID starting with 51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5 not found: ID does not exist" containerID="51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5" Nov 25 10:30:49 crc kubenswrapper[4760]: I1125 10:30:49.155807 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5"} err="failed to get container status \"51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5\": rpc error: code = NotFound desc = could not find container \"51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5\": container with ID starting with 51590907b6400fac1d6899ca04f26e171756dc7cdbcac1d0b6ceffb1c8d931f5 not found: ID does not exist" Nov 25 10:30:49 crc kubenswrapper[4760]: I1125 10:30:49.155835 4760 scope.go:117] "RemoveContainer" containerID="276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e" Nov 25 10:30:49 crc kubenswrapper[4760]: E1125 10:30:49.156675 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e\": container with ID starting with 
276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e not found: ID does not exist" containerID="276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e" Nov 25 10:30:49 crc kubenswrapper[4760]: I1125 10:30:49.156711 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e"} err="failed to get container status \"276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e\": rpc error: code = NotFound desc = could not find container \"276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e\": container with ID starting with 276ed05705d70553ba0a729fb80aa862db30a2abef6be302f460bae4b8fd318e not found: ID does not exist" Nov 25 10:30:52 crc kubenswrapper[4760]: I1125 10:30:52.939304 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:30:53 crc kubenswrapper[4760]: E1125 10:30:52.940072 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:31:04 crc kubenswrapper[4760]: I1125 10:31:04.942615 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:31:04 crc kubenswrapper[4760]: E1125 10:31:04.943497 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:31:15 crc kubenswrapper[4760]: I1125 10:31:15.939404 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:31:15 crc kubenswrapper[4760]: E1125 10:31:15.940285 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:31:28 crc kubenswrapper[4760]: I1125 10:31:28.938687 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:31:28 crc kubenswrapper[4760]: E1125 10:31:28.939549 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:31:37 crc kubenswrapper[4760]: I1125 10:31:37.518996 4760 scope.go:117] "RemoveContainer" containerID="26a4dadfaee4a3c92f96dd39e692b16909e50b290fcf3eb479fc75ea4986e6f5" Nov 25 10:31:40 crc kubenswrapper[4760]: I1125 10:31:40.943515 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:31:40 crc kubenswrapper[4760]: E1125 10:31:40.944328 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:31:51 crc kubenswrapper[4760]: I1125 10:31:51.939120 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:31:51 crc kubenswrapper[4760]: E1125 10:31:51.940559 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fcnxs_openshift-machine-config-operator(2f5c9247-0023-4cef-8299-ca90407f76f2)\"" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" Nov 25 10:31:54 crc kubenswrapper[4760]: E1125 10:31:54.938615 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:32:03 crc kubenswrapper[4760]: I1125 10:32:03.938262 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905" Nov 25 10:32:04 crc kubenswrapper[4760]: I1125 10:32:04.766058 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"0e51d9a8566f976db407638c0a6ff4ad7bd614e35271972accab9999c8af6e38"} Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.545381 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r4rl8"] Nov 25 10:32:46 
crc kubenswrapper[4760]: E1125 10:32:46.546327 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476479c3-79d3-4f4a-92c6-95e623dddb3d" containerName="copy" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.546341 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="476479c3-79d3-4f4a-92c6-95e623dddb3d" containerName="copy" Nov 25 10:32:46 crc kubenswrapper[4760]: E1125 10:32:46.546372 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476479c3-79d3-4f4a-92c6-95e623dddb3d" containerName="gather" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.546378 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="476479c3-79d3-4f4a-92c6-95e623dddb3d" containerName="gather" Nov 25 10:32:46 crc kubenswrapper[4760]: E1125 10:32:46.546394 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab91276f-db4b-47ea-8067-86b233d7bc81" containerName="collect-profiles" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.546401 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab91276f-db4b-47ea-8067-86b233d7bc81" containerName="collect-profiles" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.546581 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab91276f-db4b-47ea-8067-86b233d7bc81" containerName="collect-profiles" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.546609 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="476479c3-79d3-4f4a-92c6-95e623dddb3d" containerName="gather" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.546625 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="476479c3-79d3-4f4a-92c6-95e623dddb3d" containerName="copy" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.548100 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.563206 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4rl8"] Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.666354 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-utilities\") pod \"certified-operators-r4rl8\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.666457 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mzgb\" (UniqueName: \"kubernetes.io/projected/13d3b4e1-b443-4849-acb7-89eef289d67e-kube-api-access-8mzgb\") pod \"certified-operators-r4rl8\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.666546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-catalog-content\") pod \"certified-operators-r4rl8\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.768042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-utilities\") pod \"certified-operators-r4rl8\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.768152 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8mzgb\" (UniqueName: \"kubernetes.io/projected/13d3b4e1-b443-4849-acb7-89eef289d67e-kube-api-access-8mzgb\") pod \"certified-operators-r4rl8\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.768279 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-catalog-content\") pod \"certified-operators-r4rl8\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.768530 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-utilities\") pod \"certified-operators-r4rl8\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.768814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-catalog-content\") pod \"certified-operators-r4rl8\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.801998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mzgb\" (UniqueName: \"kubernetes.io/projected/13d3b4e1-b443-4849-acb7-89eef289d67e-kube-api-access-8mzgb\") pod \"certified-operators-r4rl8\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:46 crc kubenswrapper[4760]: I1125 10:32:46.881335 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:47 crc kubenswrapper[4760]: I1125 10:32:47.438627 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4rl8"] Nov 25 10:32:48 crc kubenswrapper[4760]: I1125 10:32:48.206042 4760 generic.go:334] "Generic (PLEG): container finished" podID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerID="58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb" exitCode=0 Nov 25 10:32:48 crc kubenswrapper[4760]: I1125 10:32:48.206118 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4rl8" event={"ID":"13d3b4e1-b443-4849-acb7-89eef289d67e","Type":"ContainerDied","Data":"58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb"} Nov 25 10:32:48 crc kubenswrapper[4760]: I1125 10:32:48.206155 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4rl8" event={"ID":"13d3b4e1-b443-4849-acb7-89eef289d67e","Type":"ContainerStarted","Data":"79f2c04e026f64eee87872534a9f987295e2fba5d94a7055a8160f5c0d99b6a5"} Nov 25 10:32:50 crc kubenswrapper[4760]: I1125 10:32:50.224800 4760 generic.go:334] "Generic (PLEG): container finished" podID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerID="ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde" exitCode=0 Nov 25 10:32:50 crc kubenswrapper[4760]: I1125 10:32:50.224908 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4rl8" event={"ID":"13d3b4e1-b443-4849-acb7-89eef289d67e","Type":"ContainerDied","Data":"ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde"} Nov 25 10:32:51 crc kubenswrapper[4760]: I1125 10:32:51.249116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4rl8" 
event={"ID":"13d3b4e1-b443-4849-acb7-89eef289d67e","Type":"ContainerStarted","Data":"3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f"} Nov 25 10:32:51 crc kubenswrapper[4760]: I1125 10:32:51.275215 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r4rl8" podStartSLOduration=2.591255824 podStartE2EDuration="5.275196052s" podCreationTimestamp="2025-11-25 10:32:46 +0000 UTC" firstStartedPulling="2025-11-25 10:32:48.208739578 +0000 UTC m=+8501.917770373" lastFinishedPulling="2025-11-25 10:32:50.892679806 +0000 UTC m=+8504.601710601" observedRunningTime="2025-11-25 10:32:51.268089959 +0000 UTC m=+8504.977120754" watchObservedRunningTime="2025-11-25 10:32:51.275196052 +0000 UTC m=+8504.984226847" Nov 25 10:32:56 crc kubenswrapper[4760]: I1125 10:32:56.882266 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:56 crc kubenswrapper[4760]: I1125 10:32:56.884220 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:56 crc kubenswrapper[4760]: I1125 10:32:56.934043 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:57 crc kubenswrapper[4760]: I1125 10:32:57.360675 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:57 crc kubenswrapper[4760]: I1125 10:32:57.415186 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4rl8"] Nov 25 10:32:59 crc kubenswrapper[4760]: I1125 10:32:59.333128 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r4rl8" podUID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerName="registry-server" 
containerID="cri-o://3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f" gracePeriod=2 Nov 25 10:32:59 crc kubenswrapper[4760]: I1125 10:32:59.819943 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:32:59 crc kubenswrapper[4760]: I1125 10:32:59.961540 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-catalog-content\") pod \"13d3b4e1-b443-4849-acb7-89eef289d67e\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " Nov 25 10:32:59 crc kubenswrapper[4760]: I1125 10:32:59.961715 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-utilities\") pod \"13d3b4e1-b443-4849-acb7-89eef289d67e\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " Nov 25 10:32:59 crc kubenswrapper[4760]: I1125 10:32:59.961888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mzgb\" (UniqueName: \"kubernetes.io/projected/13d3b4e1-b443-4849-acb7-89eef289d67e-kube-api-access-8mzgb\") pod \"13d3b4e1-b443-4849-acb7-89eef289d67e\" (UID: \"13d3b4e1-b443-4849-acb7-89eef289d67e\") " Nov 25 10:32:59 crc kubenswrapper[4760]: I1125 10:32:59.963093 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-utilities" (OuterVolumeSpecName: "utilities") pod "13d3b4e1-b443-4849-acb7-89eef289d67e" (UID: "13d3b4e1-b443-4849-acb7-89eef289d67e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:32:59 crc kubenswrapper[4760]: I1125 10:32:59.967471 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d3b4e1-b443-4849-acb7-89eef289d67e-kube-api-access-8mzgb" (OuterVolumeSpecName: "kube-api-access-8mzgb") pod "13d3b4e1-b443-4849-acb7-89eef289d67e" (UID: "13d3b4e1-b443-4849-acb7-89eef289d67e"). InnerVolumeSpecName "kube-api-access-8mzgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.064662 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.064787 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mzgb\" (UniqueName: \"kubernetes.io/projected/13d3b4e1-b443-4849-acb7-89eef289d67e-kube-api-access-8mzgb\") on node \"crc\" DevicePath \"\"" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.349281 4760 generic.go:334] "Generic (PLEG): container finished" podID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerID="3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f" exitCode=0 Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.349438 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r4rl8" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.349389 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4rl8" event={"ID":"13d3b4e1-b443-4849-acb7-89eef289d67e","Type":"ContainerDied","Data":"3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f"} Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.349961 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4rl8" event={"ID":"13d3b4e1-b443-4849-acb7-89eef289d67e","Type":"ContainerDied","Data":"79f2c04e026f64eee87872534a9f987295e2fba5d94a7055a8160f5c0d99b6a5"} Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.350034 4760 scope.go:117] "RemoveContainer" containerID="3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.374798 4760 scope.go:117] "RemoveContainer" containerID="ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.393886 4760 scope.go:117] "RemoveContainer" containerID="58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.450477 4760 scope.go:117] "RemoveContainer" containerID="3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f" Nov 25 10:33:00 crc kubenswrapper[4760]: E1125 10:33:00.450962 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f\": container with ID starting with 3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f not found: ID does not exist" containerID="3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.451010 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f"} err="failed to get container status \"3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f\": rpc error: code = NotFound desc = could not find container \"3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f\": container with ID starting with 3fe28d4e3261261e312fe14e22756a6416c3716ef5a1e6f8177b5ae34204d82f not found: ID does not exist" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.451037 4760 scope.go:117] "RemoveContainer" containerID="ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde" Nov 25 10:33:00 crc kubenswrapper[4760]: E1125 10:33:00.451534 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde\": container with ID starting with ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde not found: ID does not exist" containerID="ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.451589 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde"} err="failed to get container status \"ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde\": rpc error: code = NotFound desc = could not find container \"ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde\": container with ID starting with ad642a980403cdee624ce089592a710a7e3ab1d4ba66aa659ec9c7d626671fde not found: ID does not exist" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.451622 4760 scope.go:117] "RemoveContainer" containerID="58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb" Nov 25 10:33:00 crc kubenswrapper[4760]: E1125 10:33:00.452070 4760 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb\": container with ID starting with 58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb not found: ID does not exist" containerID="58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.452106 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb"} err="failed to get container status \"58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb\": rpc error: code = NotFound desc = could not find container \"58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb\": container with ID starting with 58872b63ff702f471f202320f30c609bc3dd337d2f6f6d91a951c34db600f1bb not found: ID does not exist" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.653878 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13d3b4e1-b443-4849-acb7-89eef289d67e" (UID: "13d3b4e1-b443-4849-acb7-89eef289d67e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.679869 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d3b4e1-b443-4849-acb7-89eef289d67e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.989694 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4rl8"] Nov 25 10:33:00 crc kubenswrapper[4760]: I1125 10:33:00.998568 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r4rl8"] Nov 25 10:33:02 crc kubenswrapper[4760]: I1125 10:33:02.950168 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d3b4e1-b443-4849-acb7-89eef289d67e" path="/var/lib/kubelet/pods/13d3b4e1-b443-4849-acb7-89eef289d67e/volumes" Nov 25 10:33:09 crc kubenswrapper[4760]: E1125 10:33:09.939357 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.943539 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56fps"] Nov 25 10:33:43 crc kubenswrapper[4760]: E1125 10:33:43.944852 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerName="extract-content" Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.944872 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerName="extract-content" Nov 25 10:33:43 crc kubenswrapper[4760]: E1125 10:33:43.944892 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d3b4e1-b443-4849-acb7-89eef289d67e" 
containerName="registry-server" Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.944900 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerName="registry-server" Nov 25 10:33:43 crc kubenswrapper[4760]: E1125 10:33:43.944944 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerName="extract-utilities" Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.944954 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerName="extract-utilities" Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.945273 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d3b4e1-b443-4849-acb7-89eef289d67e" containerName="registry-server" Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.946702 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56fps" Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.961146 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56fps"] Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.989275 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96n9f\" (UniqueName: \"kubernetes.io/projected/4da31243-c00e-4706-a1ff-64bdf24aa4f1-kube-api-access-96n9f\") pod \"redhat-operators-56fps\" (UID: \"4da31243-c00e-4706-a1ff-64bdf24aa4f1\") " pod="openshift-marketplace/redhat-operators-56fps" Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.989379 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da31243-c00e-4706-a1ff-64bdf24aa4f1-utilities\") pod \"redhat-operators-56fps\" (UID: \"4da31243-c00e-4706-a1ff-64bdf24aa4f1\") " 
pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:33:43 crc kubenswrapper[4760]: I1125 10:33:43.989458 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da31243-c00e-4706-a1ff-64bdf24aa4f1-catalog-content\") pod \"redhat-operators-56fps\" (UID: \"4da31243-c00e-4706-a1ff-64bdf24aa4f1\") " pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:33:44 crc kubenswrapper[4760]: I1125 10:33:44.091525 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96n9f\" (UniqueName: \"kubernetes.io/projected/4da31243-c00e-4706-a1ff-64bdf24aa4f1-kube-api-access-96n9f\") pod \"redhat-operators-56fps\" (UID: \"4da31243-c00e-4706-a1ff-64bdf24aa4f1\") " pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:33:44 crc kubenswrapper[4760]: I1125 10:33:44.091607 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da31243-c00e-4706-a1ff-64bdf24aa4f1-utilities\") pod \"redhat-operators-56fps\" (UID: \"4da31243-c00e-4706-a1ff-64bdf24aa4f1\") " pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:33:44 crc kubenswrapper[4760]: I1125 10:33:44.091660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da31243-c00e-4706-a1ff-64bdf24aa4f1-catalog-content\") pod \"redhat-operators-56fps\" (UID: \"4da31243-c00e-4706-a1ff-64bdf24aa4f1\") " pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:33:44 crc kubenswrapper[4760]: I1125 10:33:44.092292 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4da31243-c00e-4706-a1ff-64bdf24aa4f1-catalog-content\") pod \"redhat-operators-56fps\" (UID: \"4da31243-c00e-4706-a1ff-64bdf24aa4f1\") " pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:33:44 crc kubenswrapper[4760]: I1125 10:33:44.092355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4da31243-c00e-4706-a1ff-64bdf24aa4f1-utilities\") pod \"redhat-operators-56fps\" (UID: \"4da31243-c00e-4706-a1ff-64bdf24aa4f1\") " pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:33:44 crc kubenswrapper[4760]: I1125 10:33:44.112520 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96n9f\" (UniqueName: \"kubernetes.io/projected/4da31243-c00e-4706-a1ff-64bdf24aa4f1-kube-api-access-96n9f\") pod \"redhat-operators-56fps\" (UID: \"4da31243-c00e-4706-a1ff-64bdf24aa4f1\") " pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:33:44 crc kubenswrapper[4760]: I1125 10:33:44.268780 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:33:44 crc kubenswrapper[4760]: I1125 10:33:44.810152 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56fps"]
Nov 25 10:33:44 crc kubenswrapper[4760]: I1125 10:33:44.890612 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fps" event={"ID":"4da31243-c00e-4706-a1ff-64bdf24aa4f1","Type":"ContainerStarted","Data":"3b2c190cb352204133574473dd6c3cbf616ade323c8494e22f921596bbfa4bd7"}
Nov 25 10:33:45 crc kubenswrapper[4760]: I1125 10:33:45.905478 4760 generic.go:334] "Generic (PLEG): container finished" podID="4da31243-c00e-4706-a1ff-64bdf24aa4f1" containerID="414a818265ec8979151c9761e468ec9c7fd58775163845a55c953d8dd69bb2bd" exitCode=0
Nov 25 10:33:45 crc kubenswrapper[4760]: I1125 10:33:45.905714 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fps" event={"ID":"4da31243-c00e-4706-a1ff-64bdf24aa4f1","Type":"ContainerDied","Data":"414a818265ec8979151c9761e468ec9c7fd58775163845a55c953d8dd69bb2bd"}
Nov 25 10:33:45 crc kubenswrapper[4760]: I1125 10:33:45.909512 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 10:33:55 crc kubenswrapper[4760]: I1125 10:33:55.013314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fps" event={"ID":"4da31243-c00e-4706-a1ff-64bdf24aa4f1","Type":"ContainerStarted","Data":"e4e52d7feb31f2d8cde8ed7603fbaf0677dad896edfc750377a4100f68e39898"}
Nov 25 10:33:57 crc kubenswrapper[4760]: I1125 10:33:57.034924 4760 generic.go:334] "Generic (PLEG): container finished" podID="4da31243-c00e-4706-a1ff-64bdf24aa4f1" containerID="e4e52d7feb31f2d8cde8ed7603fbaf0677dad896edfc750377a4100f68e39898" exitCode=0
Nov 25 10:33:57 crc kubenswrapper[4760]: I1125 10:33:57.035102 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fps" event={"ID":"4da31243-c00e-4706-a1ff-64bdf24aa4f1","Type":"ContainerDied","Data":"e4e52d7feb31f2d8cde8ed7603fbaf0677dad896edfc750377a4100f68e39898"}
Nov 25 10:33:59 crc kubenswrapper[4760]: I1125 10:33:59.053883 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fps" event={"ID":"4da31243-c00e-4706-a1ff-64bdf24aa4f1","Type":"ContainerStarted","Data":"b8dc580cbe38ca80fbf80b96d686b301629c2a59a1ce33ea05d4c9dffce64675"}
Nov 25 10:33:59 crc kubenswrapper[4760]: I1125 10:33:59.074749 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56fps" podStartSLOduration=4.175015532 podStartE2EDuration="16.074728191s" podCreationTimestamp="2025-11-25 10:33:43 +0000 UTC" firstStartedPulling="2025-11-25 10:33:45.909232262 +0000 UTC m=+8559.618263057" lastFinishedPulling="2025-11-25 10:33:57.808944871 +0000 UTC m=+8571.517975716" observedRunningTime="2025-11-25 10:33:59.068649348 +0000 UTC m=+8572.777680153" watchObservedRunningTime="2025-11-25 10:33:59.074728191 +0000 UTC m=+8572.783758996"
Nov 25 10:34:04 crc kubenswrapper[4760]: I1125 10:34:04.269195 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:34:04 crc kubenswrapper[4760]: I1125 10:34:04.269951 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:34:04 crc kubenswrapper[4760]: I1125 10:34:04.318293 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:34:05 crc kubenswrapper[4760]: I1125 10:34:05.161615 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56fps"
Nov 25 10:34:05 crc kubenswrapper[4760]: I1125 10:34:05.230408 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56fps"]
Nov 25 10:34:05 crc kubenswrapper[4760]: I1125 10:34:05.281389 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8skdl"]
Nov 25 10:34:05 crc kubenswrapper[4760]: I1125 10:34:05.281673 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8skdl" podUID="b7304e75-6f0d-481d-8fbc-5de0e061032d" containerName="registry-server" containerID="cri-o://1e9ff90afd0276a4a4143c71e255cc7fa37b8d56d49b03cbf206eabaad74b26d" gracePeriod=2
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.127048 4760 generic.go:334] "Generic (PLEG): container finished" podID="b7304e75-6f0d-481d-8fbc-5de0e061032d" containerID="1e9ff90afd0276a4a4143c71e255cc7fa37b8d56d49b03cbf206eabaad74b26d" exitCode=0
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.128019 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8skdl" event={"ID":"b7304e75-6f0d-481d-8fbc-5de0e061032d","Type":"ContainerDied","Data":"1e9ff90afd0276a4a4143c71e255cc7fa37b8d56d49b03cbf206eabaad74b26d"}
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.614130 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8skdl"
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.718692 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf7j6\" (UniqueName: \"kubernetes.io/projected/b7304e75-6f0d-481d-8fbc-5de0e061032d-kube-api-access-kf7j6\") pod \"b7304e75-6f0d-481d-8fbc-5de0e061032d\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") "
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.718854 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-utilities\") pod \"b7304e75-6f0d-481d-8fbc-5de0e061032d\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") "
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.718969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-catalog-content\") pod \"b7304e75-6f0d-481d-8fbc-5de0e061032d\" (UID: \"b7304e75-6f0d-481d-8fbc-5de0e061032d\") "
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.725991 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7304e75-6f0d-481d-8fbc-5de0e061032d-kube-api-access-kf7j6" (OuterVolumeSpecName: "kube-api-access-kf7j6") pod "b7304e75-6f0d-481d-8fbc-5de0e061032d" (UID: "b7304e75-6f0d-481d-8fbc-5de0e061032d"). InnerVolumeSpecName "kube-api-access-kf7j6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.733629 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-utilities" (OuterVolumeSpecName: "utilities") pod "b7304e75-6f0d-481d-8fbc-5de0e061032d" (UID: "b7304e75-6f0d-481d-8fbc-5de0e061032d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.820959 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 10:34:06 crc kubenswrapper[4760]: I1125 10:34:06.820999 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf7j6\" (UniqueName: \"kubernetes.io/projected/b7304e75-6f0d-481d-8fbc-5de0e061032d-kube-api-access-kf7j6\") on node \"crc\" DevicePath \"\""
Nov 25 10:34:07 crc kubenswrapper[4760]: I1125 10:34:07.139633 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8skdl" event={"ID":"b7304e75-6f0d-481d-8fbc-5de0e061032d","Type":"ContainerDied","Data":"0c8381eba06f54665a16e53aee3e5b85729548dc00b50ec6ab0c7e352774a2cb"}
Nov 25 10:34:07 crc kubenswrapper[4760]: I1125 10:34:07.139707 4760 scope.go:117] "RemoveContainer" containerID="1e9ff90afd0276a4a4143c71e255cc7fa37b8d56d49b03cbf206eabaad74b26d"
Nov 25 10:34:07 crc kubenswrapper[4760]: I1125 10:34:07.139707 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8skdl"
Nov 25 10:34:07 crc kubenswrapper[4760]: I1125 10:34:07.250171 4760 scope.go:117] "RemoveContainer" containerID="d66ae8524eb9cfc9f95235391796b05c183f86405cd397c0231f193ea0423c28"
Nov 25 10:34:07 crc kubenswrapper[4760]: I1125 10:34:07.376590 4760 scope.go:117] "RemoveContainer" containerID="6a00914b25a9eb2add652e1f4ad95168169034985f53ea5a9b773914f49e724e"
Nov 25 10:34:07 crc kubenswrapper[4760]: I1125 10:34:07.624852 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7304e75-6f0d-481d-8fbc-5de0e061032d" (UID: "b7304e75-6f0d-481d-8fbc-5de0e061032d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 10:34:07 crc kubenswrapper[4760]: I1125 10:34:07.640141 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7304e75-6f0d-481d-8fbc-5de0e061032d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 10:34:07 crc kubenswrapper[4760]: I1125 10:34:07.772360 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8skdl"]
Nov 25 10:34:07 crc kubenswrapper[4760]: I1125 10:34:07.784178 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8skdl"]
Nov 25 10:34:08 crc kubenswrapper[4760]: I1125 10:34:08.950890 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7304e75-6f0d-481d-8fbc-5de0e061032d" path="/var/lib/kubelet/pods/b7304e75-6f0d-481d-8fbc-5de0e061032d/volumes"
Nov 25 10:34:31 crc kubenswrapper[4760]: I1125 10:34:31.746387 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 10:34:31 crc kubenswrapper[4760]: I1125 10:34:31.747496 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 10:34:39 crc kubenswrapper[4760]: E1125 10:34:39.938315 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Nov 25 10:35:01 crc kubenswrapper[4760]: I1125 10:35:01.746432 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 10:35:01 crc kubenswrapper[4760]: I1125 10:35:01.746899 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 10:35:31 crc kubenswrapper[4760]: I1125 10:35:31.746536 4760 patch_prober.go:28] interesting pod/machine-config-daemon-fcnxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 10:35:31 crc kubenswrapper[4760]: I1125 10:35:31.747335 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 10:35:31 crc kubenswrapper[4760]: I1125 10:35:31.747389 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs"
Nov 25 10:35:31 crc kubenswrapper[4760]: I1125 10:35:31.748179 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e51d9a8566f976db407638c0a6ff4ad7bd614e35271972accab9999c8af6e38"} pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 10:35:31 crc kubenswrapper[4760]: I1125 10:35:31.748230 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" podUID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerName="machine-config-daemon" containerID="cri-o://0e51d9a8566f976db407638c0a6ff4ad7bd614e35271972accab9999c8af6e38" gracePeriod=600
Nov 25 10:35:32 crc kubenswrapper[4760]: I1125 10:35:32.074642 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f5c9247-0023-4cef-8299-ca90407f76f2" containerID="0e51d9a8566f976db407638c0a6ff4ad7bd614e35271972accab9999c8af6e38" exitCode=0
Nov 25 10:35:32 crc kubenswrapper[4760]: I1125 10:35:32.074739 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerDied","Data":"0e51d9a8566f976db407638c0a6ff4ad7bd614e35271972accab9999c8af6e38"}
Nov 25 10:35:32 crc kubenswrapper[4760]: I1125 10:35:32.075438 4760 scope.go:117] "RemoveContainer" containerID="8b24e595cfd12fadfff814f76d0e5f7d4ab2599ac53bdcc4d0f26f4523afd905"
Nov 25 10:35:33 crc kubenswrapper[4760]: I1125 10:35:33.087834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fcnxs" event={"ID":"2f5c9247-0023-4cef-8299-ca90407f76f2","Type":"ContainerStarted","Data":"eac9646dbc119a5f927c566ec4a37c4e0f9590a2dac92aaa417d601cb59b79d8"}
Nov 25 10:36:04 crc kubenswrapper[4760]: E1125 10:36:04.939061 4760 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"